I0428 00:01:05.107261 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0428 00:01:05.107498 7 e2e.go:124] Starting e2e run "ef00971f-cbe2-484d-8239-0eb176cdbbbc" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588032064 - Will randomize all specs
Will run 275 of 4992 specs

Apr 28 00:01:05.159: INFO: >>> kubeConfig: /root/.kube/config
Apr 28 00:01:05.164: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 28 00:01:05.187: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 28 00:01:05.218: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 28 00:01:05.218: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 28 00:01:05.218: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 28 00:01:05.224: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 28 00:01:05.224: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 28 00:01:05.224: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 28 00:01:05.225: INFO: kube-apiserver version: v1.17.0
Apr 28 00:01:05.225: INFO: >>> kubeConfig: /root/.kube/config
Apr 28 00:01:05.228: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:01:05.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
Apr 28 00:01:05.326: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 28 00:01:05.830: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 28 00:01:07.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628865, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628865, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628865, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628865, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 28 00:01:09.845: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628865, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628865, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628865, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628865, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 28 00:01:12.854: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 00:01:12.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:01:14.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-48" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:9.010 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":1,"skipped":13,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:01:14.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-f2406829-541e-489a-8128-94bc664f81f0
STEP: Creating a pod to test consume secrets
Apr 28 00:01:14.336: INFO: Waiting up to 5m0s for pod "pod-secrets-9c7a003e-ad10-42fd-b7d6-bb604f1c6b17" in namespace "secrets-2614" to be "Succeeded or Failed"
Apr 28 00:01:14.362: INFO: Pod "pod-secrets-9c7a003e-ad10-42fd-b7d6-bb604f1c6b17": Phase="Pending", Reason="", readiness=false. Elapsed: 25.209385ms
Apr 28 00:01:16.366: INFO: Pod "pod-secrets-9c7a003e-ad10-42fd-b7d6-bb604f1c6b17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029828462s
Apr 28 00:01:18.370: INFO: Pod "pod-secrets-9c7a003e-ad10-42fd-b7d6-bb604f1c6b17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033960678s
STEP: Saw pod success
Apr 28 00:01:18.370: INFO: Pod "pod-secrets-9c7a003e-ad10-42fd-b7d6-bb604f1c6b17" satisfied condition "Succeeded or Failed"
Apr 28 00:01:18.373: INFO: Trying to get logs from node latest-worker pod pod-secrets-9c7a003e-ad10-42fd-b7d6-bb604f1c6b17 container secret-volume-test:
STEP: delete the pod
Apr 28 00:01:18.442: INFO: Waiting for pod pod-secrets-9c7a003e-ad10-42fd-b7d6-bb604f1c6b17 to disappear
Apr 28 00:01:18.453: INFO: Pod pod-secrets-9c7a003e-ad10-42fd-b7d6-bb604f1c6b17 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:01:18.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2614" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":23,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:01:18.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-2f4a0a14-0e1e-43a2-acb3-643a8a544dfb
STEP: Creating a pod to test consume configMaps
Apr 28 00:01:18.532: INFO: Waiting up to 5m0s for pod "pod-configmaps-3189ae01-23a1-4fdd-a40c-1df18ba4af38" in namespace "configmap-7693" to be "Succeeded or Failed"
Apr 28 00:01:18.537: INFO: Pod "pod-configmaps-3189ae01-23a1-4fdd-a40c-1df18ba4af38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331596ms
Apr 28 00:01:20.646: INFO: Pod "pod-configmaps-3189ae01-23a1-4fdd-a40c-1df18ba4af38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113344704s
Apr 28 00:01:22.689: INFO: Pod "pod-configmaps-3189ae01-23a1-4fdd-a40c-1df18ba4af38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156664848s
STEP: Saw pod success
Apr 28 00:01:22.689: INFO: Pod "pod-configmaps-3189ae01-23a1-4fdd-a40c-1df18ba4af38" satisfied condition "Succeeded or Failed"
Apr 28 00:01:22.692: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3189ae01-23a1-4fdd-a40c-1df18ba4af38 container configmap-volume-test:
STEP: delete the pod
Apr 28 00:01:22.789: INFO: Waiting for pod pod-configmaps-3189ae01-23a1-4fdd-a40c-1df18ba4af38 to disappear
Apr 28 00:01:22.814: INFO: Pod pod-configmaps-3189ae01-23a1-4fdd-a40c-1df18ba4af38 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:01:22.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7693" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":42,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:01:22.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 28 00:01:22.957: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:22.962: INFO: Number of nodes with available pods: 0
Apr 28 00:01:22.962: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:23.967: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:23.970: INFO: Number of nodes with available pods: 0
Apr 28 00:01:23.970: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:24.967: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:24.971: INFO: Number of nodes with available pods: 0
Apr 28 00:01:24.971: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:25.967: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:25.969: INFO: Number of nodes with available pods: 0
Apr 28 00:01:25.969: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:26.966: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:26.970: INFO: Number of nodes with available pods: 2
Apr 28 00:01:26.970: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 28 00:01:27.013: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:27.016: INFO: Number of nodes with available pods: 1
Apr 28 00:01:27.016: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:28.020: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:28.023: INFO: Number of nodes with available pods: 1
Apr 28 00:01:28.023: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:29.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:29.024: INFO: Number of nodes with available pods: 1
Apr 28 00:01:29.024: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:30.022: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:30.026: INFO: Number of nodes with available pods: 1
Apr 28 00:01:30.026: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:31.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:31.024: INFO: Number of nodes with available pods: 1
Apr 28 00:01:31.024: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:32.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:32.026: INFO: Number of nodes with available pods: 1
Apr 28 00:01:32.026: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:33.020: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:33.024: INFO: Number of nodes with available pods: 1
Apr 28 00:01:33.024: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:34.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:34.026: INFO: Number of nodes with available pods: 1
Apr 28 00:01:34.026: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:35.026: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:35.029: INFO: Number of nodes with available pods: 1
Apr 28 00:01:35.029: INFO: Node latest-worker is running more than one daemon pod
Apr 28 00:01:36.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 28 00:01:36.025: INFO: Number of nodes with available pods: 2
Apr 28 00:01:36.025: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1532, will wait for the garbage collector to delete the pods
Apr 28 00:01:36.088: INFO: Deleting DaemonSet.extensions daemon-set took: 7.16238ms
Apr 28 00:01:36.389: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.268559ms
Apr 28 00:01:43.106: INFO: Number of nodes with available pods: 0
Apr 28 00:01:43.106: INFO: Number of running nodes: 0, number of available pods: 0
Apr 28 00:01:43.111: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1532/daemonsets","resourceVersion":"11575638"},"items":null}
Apr 28 00:01:43.114: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1532/pods","resourceVersion":"11575638"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:01:43.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1532" for this suite.
• [SLOW TEST:20.312 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":4,"skipped":53,"failed":0}
[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:01:43.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-821
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-821
I0428 00:01:43.297994 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-821, replica count: 2
I0428 00:01:46.348453 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0428 00:01:49.348726 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 28 00:01:49.348: INFO: Creating new exec pod
Apr 28 00:01:54.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-821 execpod8rgzg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Apr 28 00:01:56.863: INFO: stderr: "I0428 00:01:56.762182 30 log.go:172] (0xc00003a580) (0xc0007960a0) Create stream\nI0428 00:01:56.762244 30 log.go:172] (0xc00003a580) (0xc0007960a0) Stream added, broadcasting: 1\nI0428 00:01:56.764977 30 log.go:172] (0xc00003a580) Reply frame received for 1\nI0428 00:01:56.765007 30 log.go:172] (0xc00003a580) (0xc000806000) Create stream\nI0428 00:01:56.765013 30 log.go:172] (0xc00003a580) (0xc000806000) Stream added, broadcasting: 3\nI0428 00:01:56.766091 30 log.go:172] (0xc00003a580) Reply frame received for 3\nI0428 00:01:56.766154 30 log.go:172] (0xc00003a580) (0xc00081a000) Create stream\nI0428 00:01:56.766169 30 log.go:172] (0xc00003a580) (0xc00081a000) Stream added, broadcasting: 5\nI0428 00:01:56.767089 30 log.go:172] (0xc00003a580) Reply frame received for 5\nI0428 00:01:56.853929 30 log.go:172] (0xc00003a580) Data frame received for 5\nI0428 00:01:56.853976 30 log.go:172] (0xc00081a000) (5) Data frame handling\nI0428 00:01:56.854015 30 log.go:172] (0xc00081a000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0428 00:01:56.854271 30 log.go:172] (0xc00003a580) Data frame received for 5\nI0428 00:01:56.854296 30 log.go:172] (0xc00081a000) (5) Data frame handling\nI0428 00:01:56.854329 30 log.go:172] (0xc00081a000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0428 00:01:56.855348 30 log.go:172] (0xc00003a580) Data frame received for 5\nI0428 00:01:56.855377 30 log.go:172] (0xc00081a000) (5) Data frame handling\nI0428 00:01:56.855446 30 log.go:172] (0xc00003a580) Data frame received for 3\nI0428 00:01:56.855490 30 log.go:172] (0xc000806000) (3) Data frame handling\nI0428 00:01:56.857428 30 log.go:172] (0xc00003a580) Data frame received for 1\nI0428 00:01:56.857571 30 log.go:172] (0xc0007960a0) (1) Data frame handling\nI0428 00:01:56.857629 30 log.go:172] (0xc0007960a0) (1) Data frame sent\nI0428 00:01:56.857661 30 log.go:172] (0xc00003a580) (0xc0007960a0) Stream removed, broadcasting: 1\nI0428 00:01:56.857714 30 log.go:172] (0xc00003a580) Go away received\nI0428 00:01:56.858074 30 log.go:172] (0xc00003a580) (0xc0007960a0) Stream removed, broadcasting: 1\nI0428 00:01:56.858096 30 log.go:172] (0xc00003a580) (0xc000806000) Stream removed, broadcasting: 3\nI0428 00:01:56.858109 30 log.go:172] (0xc00003a580) (0xc00081a000) Stream removed, broadcasting: 5\n"
Apr 28 00:01:56.863: INFO: stdout: ""
Apr 28 00:01:56.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-821 execpod8rgzg -- /bin/sh -x -c nc -zv -t -w 2 10.96.86.29 80'
Apr 28 00:01:57.073: INFO: stderr: "I0428 00:01:56.991877 60 log.go:172] (0xc00003a420) (0xc00089a000) Create stream\nI0428 00:01:56.991941 60 log.go:172] (0xc00003a420) (0xc00089a000) Stream added, broadcasting: 1\nI0428 00:01:56.994577 60 log.go:172] (0xc00003a420) Reply frame received for 1\nI0428 00:01:56.994631 60 log.go:172] (0xc00003a420) (0xc0003ceb40) Create stream\nI0428 00:01:56.994673 60 log.go:172] (0xc00003a420) (0xc0003ceb40) Stream added, broadcasting: 3\nI0428 00:01:56.995595 60 log.go:172] (0xc00003a420) Reply frame received for 3\nI0428 00:01:56.995633 60 log.go:172] (0xc00003a420) (0xc00089a0a0) Create stream\nI0428 00:01:56.995643 60 log.go:172] (0xc00003a420) (0xc00089a0a0) Stream added, broadcasting: 5\nI0428 00:01:56.996577 60 log.go:172] (0xc00003a420) Reply frame received for 5\nI0428 00:01:57.066562 60 log.go:172] (0xc00003a420) Data frame received for 3\nI0428 00:01:57.066590 60 log.go:172] (0xc0003ceb40) (3) Data frame handling\nI0428 00:01:57.066623 60 log.go:172] (0xc00003a420) Data frame received for 5\nI0428 00:01:57.066663 60 log.go:172] (0xc00089a0a0) (5) Data frame handling\nI0428 00:01:57.066686 60 log.go:172] (0xc00089a0a0) (5) Data frame sent\nI0428 00:01:57.066699 60 log.go:172] (0xc00003a420) Data frame received for 5\nI0428 00:01:57.066709 60 log.go:172] (0xc00089a0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.86.29 80\nConnection to 10.96.86.29 80 port [tcp/http] succeeded!\nI0428 00:01:57.068056 60 log.go:172] (0xc00003a420) Data frame received for 1\nI0428 00:01:57.068089 60 log.go:172] (0xc00089a000) (1) Data frame handling\nI0428 00:01:57.068109 60 log.go:172] (0xc00089a000) (1) Data frame sent\nI0428 00:01:57.068126 60 log.go:172] (0xc00003a420) (0xc00089a000) Stream removed, broadcasting: 1\nI0428 00:01:57.068143 60 log.go:172] (0xc00003a420) Go away received\nI0428 00:01:57.068530 60 log.go:172] (0xc00003a420) (0xc00089a000) Stream removed, broadcasting: 1\nI0428 00:01:57.068552 60 log.go:172] (0xc00003a420) (0xc0003ceb40) Stream removed, broadcasting: 3\nI0428 00:01:57.068563 60 log.go:172] (0xc00003a420) (0xc00089a0a0) Stream removed, broadcasting: 5\n"
Apr 28 00:01:57.074: INFO: stdout: ""
Apr 28 00:01:57.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-821 execpod8rgzg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31651'
Apr 28 00:01:57.293: INFO: stderr: "I0428 00:01:57.202527 80 log.go:172] (0xc0008e49a0) (0xc00028f400) Create stream\nI0428 00:01:57.202583 80 log.go:172] (0xc0008e49a0) (0xc00028f400) Stream added, broadcasting: 1\nI0428 00:01:57.213896 80 log.go:172] (0xc0008e49a0) Reply frame received for 1\nI0428 00:01:57.213998 80 log.go:172] (0xc0008e49a0) (0xc00071c000) Create stream\nI0428 00:01:57.214032 80 log.go:172] (0xc0008e49a0) (0xc00071c000) Stream added, broadcasting: 3\nI0428 00:01:57.215606 80 log.go:172] (0xc0008e49a0) Reply frame received for 3\nI0428 00:01:57.215693 80 log.go:172] (0xc0008e49a0) (0xc0004ec000) Create stream\nI0428 00:01:57.215757 80 log.go:172] (0xc0008e49a0) (0xc0004ec000) Stream added, broadcasting: 5\nI0428 00:01:57.218400 80 log.go:172] (0xc0008e49a0) Reply frame received for 5\nI0428 00:01:57.285001 80 log.go:172] (0xc0008e49a0) Data frame received for 5\nI0428 00:01:57.285052 80 log.go:172] (0xc0004ec000) (5) Data frame handling\nI0428 00:01:57.285088 80 log.go:172] (0xc0004ec000) (5) Data frame sent\nI0428 00:01:57.285107 80 log.go:172] (0xc0008e49a0) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.13 31651\nI0428 00:01:57.285275 80 log.go:172] (0xc0004ec000) (5) Data frame handling\nI0428 00:01:57.285300 80 log.go:172] (0xc0004ec000) (5) Data frame sent\nConnection to 172.17.0.13 31651 port [tcp/31651] succeeded!\nI0428 00:01:57.285612 80 log.go:172] (0xc0008e49a0) Data frame received for 3\nI0428 00:01:57.285659 80 log.go:172] (0xc00071c000) (3) Data frame handling\nI0428 00:01:57.285763 80 log.go:172] (0xc0008e49a0) Data frame received for 5\nI0428 00:01:57.285801 80 log.go:172] (0xc0004ec000) (5) Data frame handling\nI0428 00:01:57.287363 80 log.go:172] (0xc0008e49a0) Data frame received for 1\nI0428 00:01:57.287408 80 log.go:172] (0xc00028f400) (1) Data frame handling\nI0428 00:01:57.287432 80 log.go:172] (0xc00028f400) (1) Data frame sent\nI0428 00:01:57.287450 80 log.go:172] (0xc0008e49a0) (0xc00028f400) Stream removed, broadcasting: 1\nI0428 00:01:57.287481 80 log.go:172] (0xc0008e49a0) Go away received\nI0428 00:01:57.287944 80 log.go:172] (0xc0008e49a0) (0xc00028f400) Stream removed, broadcasting: 1\nI0428 00:01:57.287967 80 log.go:172] (0xc0008e49a0) (0xc00071c000) Stream removed, broadcasting: 3\nI0428 00:01:57.287987 80 log.go:172] (0xc0008e49a0) (0xc0004ec000) Stream removed, broadcasting: 5\n"
Apr 28 00:01:57.293: INFO: stdout: ""
Apr 28 00:01:57.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-821 execpod8rgzg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31651'
Apr 28 00:01:57.518: INFO: stderr: "I0428 00:01:57.432680 101 log.go:172] (0xc00003b970) (0xc0006555e0) Create stream\nI0428 00:01:57.432772 101 log.go:172] (0xc00003b970) (0xc0006555e0) Stream added, broadcasting: 1\nI0428 00:01:57.435158 101 log.go:172] (0xc00003b970) Reply frame received for 1\nI0428 00:01:57.435186 101 log.go:172] (0xc00003b970) (0xc000655680) Create stream\nI0428 00:01:57.435195 101 log.go:172] (0xc00003b970) (0xc000655680) Stream added, broadcasting: 3\nI0428 00:01:57.435936 101 log.go:172] (0xc00003b970) Reply frame received for 3\nI0428 00:01:57.435990 101 log.go:172] (0xc00003b970) (0xc000655720) Create stream\nI0428 00:01:57.436007 101 log.go:172] (0xc00003b970) (0xc000655720) Stream added, broadcasting: 5\nI0428 00:01:57.436714 101 log.go:172] (0xc00003b970) Reply frame received for 5\nI0428 00:01:57.508912 101 log.go:172] (0xc00003b970) Data frame received for 5\nI0428 00:01:57.508947 101 log.go:172] (0xc000655720) (5) Data frame handling\nI0428 00:01:57.508965 101 log.go:172] (0xc000655720) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31651\nI0428 00:01:57.509468 101 log.go:172] (0xc00003b970) Data frame received for 5\nI0428 00:01:57.509492 101 log.go:172] (0xc000655720) (5) Data frame handling\nI0428 00:01:57.509507 101 log.go:172] (0xc000655720) (5) Data frame sent\nConnection to 172.17.0.12 31651 port [tcp/31651] succeeded!\nI0428 00:01:57.509780 101 log.go:172] (0xc00003b970) Data frame received for 5\nI0428 00:01:57.509831 101 log.go:172] (0xc000655720) (5) Data frame handling\nI0428 00:01:57.510289 101 log.go:172] (0xc00003b970) Data frame received for 3\nI0428 00:01:57.510339 101 log.go:172] (0xc000655680) (3) Data frame handling\nI0428 00:01:57.511171 101 log.go:172] (0xc00003b970) Data frame received for 1\nI0428 00:01:57.511198 101 log.go:172] (0xc0006555e0) (1) Data frame handling\nI0428 00:01:57.511213 101 log.go:172] (0xc0006555e0) (1) Data frame sent\nI0428 00:01:57.511577 101 log.go:172] (0xc00003b970) (0xc0006555e0) Stream removed, broadcasting: 1\nI0428 00:01:57.511983 101 log.go:172] (0xc00003b970) (0xc0006555e0) Stream removed, broadcasting: 1\nI0428 00:01:57.512006 101 log.go:172] (0xc00003b970) (0xc000655680) Stream removed, broadcasting: 3\nI0428 00:01:57.513421 101 log.go:172] (0xc00003b970) Go away received\nI0428 00:01:57.513572 101 log.go:172] (0xc00003b970) (0xc000655720) Stream removed, broadcasting: 5\n"
Apr 28 00:01:57.518: INFO: stdout: ""
Apr 28 00:01:57.518: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:01:57.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-821" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:14.451 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":5,"skipped":53,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:01:57.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 28 00:01:57.648: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 28 00:01:57.670: INFO: Waiting for terminating namespaces to be deleted...
Apr 28 00:01:57.673: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 28 00:01:57.690: INFO: externalname-service-v4psn from services-821 started at 2020-04-28 00:01:43 +0000 UTC (1 container statuses recorded)
Apr 28 00:01:57.690: INFO: Container externalname-service ready: true, restart count 0
Apr 28 00:01:57.690: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 28 00:01:57.690: INFO: Container kindnet-cni ready: true, restart count 0
Apr 28 00:01:57.690: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 28 00:01:57.690: INFO: Container kube-proxy ready: true, restart count 0
Apr 28 00:01:57.690: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 28 00:01:57.696: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 28 00:01:57.696: INFO: Container kindnet-cni ready: true, restart count 0
Apr 28 00:01:57.696: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 28 00:01:57.696: INFO: Container kube-proxy ready: true, restart count 0
Apr 28 00:01:57.696: INFO: externalname-service-jjdpv from services-821 started at 2020-04-28 00:01:43 +0000 UTC (1 container statuses recorded)
Apr 28 00:01:57.696: INFO: Container externalname-service ready: true, restart count 0
Apr 28 00:01:57.696: INFO: execpod8rgzg from services-821 started at 2020-04-28 00:01:49 +0000 UTC (1 container statuses recorded)
Apr 28 00:01:57.696: INFO: Container agnhost-pause ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can 
launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-6bf974a4-6d87-41d5-af95-1007d09bd779 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-6bf974a4-6d87-41d5-af95-1007d09bd779 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-6bf974a4-6d87-41d5-af95-1007d09bd779
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:02:13.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2524" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:16.318 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":6,"skipped":65,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:02:13.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8335.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8335.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local; 
sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 28 00:02:20.124: INFO: DNS probes using dns-test-6b6f530c-952b-49d0-aaba-16b5966794fb succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8335.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8335.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 28 00:02:28.533: INFO: File wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local from pod dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 28 00:02:28.537: INFO: File jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local from pod dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 28 00:02:28.537: INFO: Lookups using dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c failed for: [wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local]
Apr 28 00:02:33.541: INFO: File wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local from pod dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 28 00:02:33.545: INFO: File jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local from pod dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c contains 'foo.example.com. 
' instead of 'bar.example.com.'
Apr 28 00:02:33.545: INFO: Lookups using dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c failed for: [wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local]
Apr 28 00:02:38.552: INFO: File wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local from pod dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 28 00:02:38.556: INFO: File jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local from pod dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 28 00:02:38.556: INFO: Lookups using dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c failed for: [wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local]
Apr 28 00:02:43.542: INFO: File wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local from pod dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 28 00:02:43.545: INFO: File jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local from pod dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 28 00:02:43.545: INFO: Lookups using dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c failed for: [wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local]
Apr 28 00:02:48.546: INFO: File jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local from pod dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 28 00:02:48.546: INFO: Lookups using dns-8335/dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c failed for: [jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local]
Apr 28 00:02:53.546: INFO: DNS probes using dns-test-015aeece-6d82-4322-b2d9-d9bf5c60ef7c succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8335.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8335.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8335.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 28 00:03:00.057: INFO: File wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local from pod dns-8335/dns-test-55f84515-d4b0-4a45-bf1d-48bac836ea04 contains '' instead of '10.96.121.177'
Apr 28 00:03:00.060: INFO: Lookups using dns-8335/dns-test-55f84515-d4b0-4a45-bf1d-48bac836ea04 failed for: [wheezy_udp@dns-test-service-3.dns-8335.svc.cluster.local]
Apr 28 00:03:05.070: INFO: DNS probes using dns-test-55f84515-d4b0-4a45-bf1d-48bac836ea04 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:03:05.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8335" for this suite.
• [SLOW TEST:51.570 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":7,"skipped":88,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:03:05.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 28 00:03:06.258: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 28 00:03:08.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628986, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628986, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628986, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628986, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 28 00:03:10.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628986, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628986, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628986, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723628986, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 28 00:03:13.325: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: 
create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:03:23.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-979" for this suite.
STEP: Destroying namespace "webhook-979-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.065 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":8,"skipped":89,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:03:23.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:03:36.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2332" for this suite.
• [SLOW TEST:13.158 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. 
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":9,"skipped":107,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:03:36.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Apr 28 00:03:36.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5239'
Apr 28 00:03:37.062: INFO: stderr: ""
Apr 28 00:03:37.062: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 28 00:03:37.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5239'
Apr 28 00:03:37.163: INFO: stderr: ""
Apr 28 00:03:37.163: INFO: stdout: "update-demo-nautilus-8mxhr update-demo-nautilus-pp5jg "
Apr 28 00:03:37.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mxhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5239'
Apr 28 00:03:37.256: INFO: stderr: ""
Apr 28 00:03:37.256: INFO: stdout: ""
Apr 28 00:03:37.256: INFO: update-demo-nautilus-8mxhr is created but not running
Apr 28 00:03:42.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5239'
Apr 28 00:03:42.359: INFO: stderr: ""
Apr 28 00:03:42.359: INFO: stdout: "update-demo-nautilus-8mxhr update-demo-nautilus-pp5jg "
Apr 28 00:03:42.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mxhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5239'
Apr 28 00:03:42.443: INFO: stderr: ""
Apr 28 00:03:42.443: INFO: stdout: "true"
Apr 28 00:03:42.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mxhr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5239' Apr 28 00:03:42.530: INFO: stderr: "" Apr 28 00:03:42.530: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 00:03:42.530: INFO: validating pod update-demo-nautilus-8mxhr Apr 28 00:03:42.534: INFO: got data: { "image": "nautilus.jpg" } Apr 28 00:03:42.534: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 00:03:42.534: INFO: update-demo-nautilus-8mxhr is verified up and running Apr 28 00:03:42.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pp5jg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5239' Apr 28 00:03:42.634: INFO: stderr: "" Apr 28 00:03:42.634: INFO: stdout: "true" Apr 28 00:03:42.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pp5jg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5239' Apr 28 00:03:42.723: INFO: stderr: "" Apr 28 00:03:42.723: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 00:03:42.723: INFO: validating pod update-demo-nautilus-pp5jg Apr 28 00:03:42.728: INFO: got data: { "image": "nautilus.jpg" } Apr 28 00:03:42.728: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 28 00:03:42.728: INFO: update-demo-nautilus-pp5jg is verified up and running
STEP: using delete to clean up resources
Apr 28 00:03:42.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5239'
Apr 28 00:03:42.832: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 28 00:03:42.832: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 28 00:03:42.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5239'
Apr 28 00:03:42.925: INFO: stderr: "No resources found in kubectl-5239 namespace.\n"
Apr 28 00:03:42.925: INFO: stdout: ""
Apr 28 00:03:42.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5239 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 28 00:03:43.020: INFO: stderr: ""
Apr 28 00:03:43.020: INFO: stdout: "update-demo-nautilus-8mxhr\nupdate-demo-nautilus-pp5jg\n"
Apr 28 00:03:43.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5239'
Apr 28 00:03:43.644: INFO: stderr: "No resources found in kubectl-5239 namespace.\n"
Apr 28 00:03:43.644: INFO: stdout: ""
Apr 28 00:03:43.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5239 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' 
Apr 28 00:03:43.740: INFO: stderr: ""
Apr 28 00:03:43.740: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:03:43.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5239" for this suite.
• [SLOW TEST:7.046 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":10,"skipped":112,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:03:43.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 28 00:03:44.131: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51c396ed-6df6-4904-b6f3-fe38170b1900" in namespace "projected-5404" to be "Succeeded or Failed"
Apr 28 00:03:44.182: INFO: Pod "downwardapi-volume-51c396ed-6df6-4904-b6f3-fe38170b1900": Phase="Pending", Reason="", readiness=false. Elapsed: 51.586683ms
Apr 28 00:03:46.187: INFO: Pod "downwardapi-volume-51c396ed-6df6-4904-b6f3-fe38170b1900": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056100942s
Apr 28 00:03:48.191: INFO: Pod "downwardapi-volume-51c396ed-6df6-4904-b6f3-fe38170b1900": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060142034s
STEP: Saw pod success
Apr 28 00:03:48.191: INFO: Pod "downwardapi-volume-51c396ed-6df6-4904-b6f3-fe38170b1900" satisfied condition "Succeeded or Failed"
Apr 28 00:03:48.194: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-51c396ed-6df6-4904-b6f3-fe38170b1900 container client-container: 
STEP: delete the pod
Apr 28 00:03:48.291: INFO: Waiting for pod downwardapi-volume-51c396ed-6df6-4904-b6f3-fe38170b1900 to disappear
Apr 28 00:03:48.295: INFO: Pod downwardapi-volume-51c396ed-6df6-4904-b6f3-fe38170b1900 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:03:48.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5404" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":125,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:03:48.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 28 00:03:52.373: INFO: &Pod{ObjectMeta:{send-events-2faaa0ce-50e9-4c64-afeb-cad0b3f95e2d events-8949 /api/v1/namespaces/events-8949/pods/send-events-2faaa0ce-50e9-4c64-afeb-cad0b3f95e2d 71ab77ae-08bd-47a6-aeaf-1bc588e30e70 11576480 0 2020-04-28 00:03:48 +0000 UTC map[name:foo time:342849686] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-skfcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-skfcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-skfcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Conta
iner{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:03:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:03:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:03:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:03:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.38,StartTime:2020-04-28 00:03:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 00:03:50 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://deade5b19c8af0d67bb6cf1109f623375639858a0f2aedb68d16b700ad37e59f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 28 00:03:54.378: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 28 00:03:56.382: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:03:56.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8949" for this suite. 
• [SLOW TEST:8.145 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":12,"skipped":147,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:03:56.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 28 00:03:56.500: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:04:13.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "pods-4721" for this suite. • [SLOW TEST:16.611 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":172,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:04:13.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-ae100044-c472-4600-a56d-607b9ff6bf48 in namespace container-probe-760 Apr 28 00:04:17.145: INFO: Started pod liveness-ae100044-c472-4600-a56d-607b9ff6bf48 in namespace container-probe-760 STEP: checking the pod's current state and verifying that restartCount is present Apr 28 00:04:17.148: INFO: Initial restart count of pod liveness-ae100044-c472-4600-a56d-607b9ff6bf48 is 0 Apr 28 00:04:41.202: INFO: Restart count of pod container-probe-760/liveness-ae100044-c472-4600-a56d-607b9ff6bf48 
is now 1 (24.054313183s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:04:41.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-760" for this suite. • [SLOW TEST:28.166 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:04:41.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-10b33f51-bd0b-443e-bf08-408ed02adccd STEP: Creating a pod to test consume configMaps Apr 28 00:04:41.333: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-809e436c-f7fa-4095-bd80-2f2c5ea5e23e" in namespace 
"projected-5180" to be "Succeeded or Failed" Apr 28 00:04:41.488: INFO: Pod "pod-projected-configmaps-809e436c-f7fa-4095-bd80-2f2c5ea5e23e": Phase="Pending", Reason="", readiness=false. Elapsed: 155.052485ms Apr 28 00:04:43.493: INFO: Pod "pod-projected-configmaps-809e436c-f7fa-4095-bd80-2f2c5ea5e23e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15980307s Apr 28 00:04:45.512: INFO: Pod "pod-projected-configmaps-809e436c-f7fa-4095-bd80-2f2c5ea5e23e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.178884329s STEP: Saw pod success Apr 28 00:04:45.512: INFO: Pod "pod-projected-configmaps-809e436c-f7fa-4095-bd80-2f2c5ea5e23e" satisfied condition "Succeeded or Failed" Apr 28 00:04:45.515: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-809e436c-f7fa-4095-bd80-2f2c5ea5e23e container projected-configmap-volume-test: STEP: delete the pod Apr 28 00:04:46.005: INFO: Waiting for pod pod-projected-configmaps-809e436c-f7fa-4095-bd80-2f2c5ea5e23e to disappear Apr 28 00:04:46.062: INFO: Pod pod-projected-configmaps-809e436c-f7fa-4095-bd80-2f2c5ea5e23e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:04:46.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5180" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:04:46.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 00:04:46.183: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3bed9b3-ab96-4748-bc3a-8484c79de21b" in namespace "downward-api-7559" to be "Succeeded or Failed" Apr 28 00:04:46.201: INFO: Pod "downwardapi-volume-a3bed9b3-ab96-4748-bc3a-8484c79de21b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.403895ms Apr 28 00:04:48.292: INFO: Pod "downwardapi-volume-a3bed9b3-ab96-4748-bc3a-8484c79de21b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10817038s Apr 28 00:04:50.298: INFO: Pod "downwardapi-volume-a3bed9b3-ab96-4748-bc3a-8484c79de21b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.114822086s STEP: Saw pod success Apr 28 00:04:50.298: INFO: Pod "downwardapi-volume-a3bed9b3-ab96-4748-bc3a-8484c79de21b" satisfied condition "Succeeded or Failed" Apr 28 00:04:50.302: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a3bed9b3-ab96-4748-bc3a-8484c79de21b container client-container: STEP: delete the pod Apr 28 00:04:50.418: INFO: Waiting for pod downwardapi-volume-a3bed9b3-ab96-4748-bc3a-8484c79de21b to disappear Apr 28 00:04:50.423: INFO: Pod downwardapi-volume-a3bed9b3-ab96-4748-bc3a-8484c79de21b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:04:50.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7559" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":228,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:04:50.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating a pod Apr 28 00:04:50.484: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-3914 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 28 00:04:50.590: INFO: stderr: "" Apr 28 00:04:50.590: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Apr 28 00:04:50.590: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 28 00:04:50.590: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3914" to be "running and ready, or succeeded" Apr 28 00:04:50.680: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 90.035794ms Apr 28 00:04:52.687: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096586275s Apr 28 00:04:54.691: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.100776797s Apr 28 00:04:54.691: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 28 00:04:54.691: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Apr 28 00:04:54.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3914' Apr 28 00:04:54.833: INFO: stderr: "" Apr 28 00:04:54.833: INFO: stdout: "I0428 00:04:52.955071 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/xfq 357\nI0428 00:04:53.155206 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/zdg7 440\nI0428 00:04:53.355253 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/5jqv 582\nI0428 00:04:53.555203 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/8khl 298\nI0428 00:04:53.755250 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/hxn 230\nI0428 00:04:53.955248 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/rx6 305\nI0428 00:04:54.155330 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/2c97 516\nI0428 00:04:54.355216 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/spxk 336\nI0428 00:04:54.555246 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/44rd 399\nI0428 00:04:54.755203 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/k6kq 599\n" STEP: limiting log lines Apr 28 00:04:54.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3914 --tail=1' Apr 28 00:04:54.932: INFO: stderr: "" Apr 28 00:04:54.932: INFO: stdout: "I0428 00:04:54.755203 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/k6kq 599\n" Apr 28 00:04:54.932: INFO: got output "I0428 00:04:54.755203 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/k6kq 599\n" STEP: limiting log bytes Apr 28 00:04:54.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3914 --limit-bytes=1' Apr 28 
00:04:55.053: INFO: stderr: "" Apr 28 00:04:55.053: INFO: stdout: "I" Apr 28 00:04:55.053: INFO: got output "I" STEP: exposing timestamps Apr 28 00:04:55.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3914 --tail=1 --timestamps' Apr 28 00:04:55.150: INFO: stderr: "" Apr 28 00:04:55.150: INFO: stdout: "2020-04-28T00:04:54.955390438Z I0428 00:04:54.955229 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/psk 327\n" Apr 28 00:04:55.150: INFO: got output "2020-04-28T00:04:54.955390438Z I0428 00:04:54.955229 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/psk 327\n" STEP: restricting to a time range Apr 28 00:04:57.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3914 --since=1s' Apr 28 00:04:57.753: INFO: stderr: "" Apr 28 00:04:57.753: INFO: stdout: "I0428 00:04:56.755223 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/p2m 524\nI0428 00:04:56.955239 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/8th 487\nI0428 00:04:57.155240 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/6gf 502\nI0428 00:04:57.355210 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/fr9s 533\nI0428 00:04:57.555237 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/kjgh 578\n" Apr 28 00:04:57.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3914 --since=24h' Apr 28 00:04:57.851: INFO: stderr: "" Apr 28 00:04:57.851: INFO: stdout: "I0428 00:04:52.955071 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/xfq 357\nI0428 00:04:53.155206 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/zdg7 440\nI0428 00:04:53.355253 1 
logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/5jqv 582\nI0428 00:04:53.555203 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/8khl 298\nI0428 00:04:53.755250 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/hxn 230\nI0428 00:04:53.955248 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/rx6 305\nI0428 00:04:54.155330 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/2c97 516\nI0428 00:04:54.355216 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/spxk 336\nI0428 00:04:54.555246 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/44rd 399\nI0428 00:04:54.755203 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/k6kq 599\nI0428 00:04:54.955229 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/psk 327\nI0428 00:04:55.155255 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/h9zb 590\nI0428 00:04:55.355248 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/5tbk 242\nI0428 00:04:55.555199 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/br8c 338\nI0428 00:04:55.755208 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/v9l 285\nI0428 00:04:55.955228 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/k7kf 369\nI0428 00:04:56.155255 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/28tg 419\nI0428 00:04:56.355278 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/plrn 435\nI0428 00:04:56.555237 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/n79 536\nI0428 00:04:56.755223 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/p2m 524\nI0428 00:04:56.955239 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/8th 487\nI0428 00:04:57.155240 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/6gf 502\nI0428 00:04:57.355210 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/fr9s 533\nI0428 00:04:57.555237 1 
logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/kjgh 578\nI0428 00:04:57.755252 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/6j4 315\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 28 00:04:57.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3914' Apr 28 00:05:02.988: INFO: stderr: "" Apr 28 00:05:02.988: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:05:02.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3914" for this suite. • [SLOW TEST:12.565 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":17,"skipped":236,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 
00:05:02.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 28 00:05:06.125: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:05:06.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8481" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:05:06.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-8174 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 28 00:05:06.367: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 28 00:05:06.404: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 28 00:05:08.572: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 28 00:05:10.416: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:05:12.409: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:05:14.409: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:05:16.408: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:05:18.408: INFO: The status of Pod 
netserver-0 is Running (Ready = false) Apr 28 00:05:20.408: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 28 00:05:20.414: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 28 00:05:22.418: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 28 00:05:24.418: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 28 00:05:26.418: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 28 00:05:28.418: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 28 00:05:32.440: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.16:8080/dial?request=hostname&protocol=http&host=10.244.2.15&port=8080&tries=1'] Namespace:pod-network-test-8174 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:05:32.440: INFO: >>> kubeConfig: /root/.kube/config I0428 00:05:32.471648 7 log.go:172] (0xc0020d4000) (0xc001f08820) Create stream I0428 00:05:32.471681 7 log.go:172] (0xc0020d4000) (0xc001f08820) Stream added, broadcasting: 1 I0428 00:05:32.475443 7 log.go:172] (0xc0020d4000) Reply frame received for 1 I0428 00:05:32.475483 7 log.go:172] (0xc0020d4000) (0xc001f088c0) Create stream I0428 00:05:32.475498 7 log.go:172] (0xc0020d4000) (0xc001f088c0) Stream added, broadcasting: 3 I0428 00:05:32.476714 7 log.go:172] (0xc0020d4000) Reply frame received for 3 I0428 00:05:32.476780 7 log.go:172] (0xc0020d4000) (0xc001dfabe0) Create stream I0428 00:05:32.476806 7 log.go:172] (0xc0020d4000) (0xc001dfabe0) Stream added, broadcasting: 5 I0428 00:05:32.478300 7 log.go:172] (0xc0020d4000) Reply frame received for 5 I0428 00:05:32.571521 7 log.go:172] (0xc0020d4000) Data frame received for 3 I0428 00:05:32.571576 7 log.go:172] (0xc001f088c0) (3) Data frame handling I0428 00:05:32.571616 7 log.go:172] (0xc001f088c0) (3) Data frame sent I0428 00:05:32.572113 7 log.go:172] (0xc0020d4000) Data 
frame received for 5 I0428 00:05:32.572152 7 log.go:172] (0xc001dfabe0) (5) Data frame handling I0428 00:05:32.572223 7 log.go:172] (0xc0020d4000) Data frame received for 3 I0428 00:05:32.572328 7 log.go:172] (0xc001f088c0) (3) Data frame handling I0428 00:05:32.574576 7 log.go:172] (0xc0020d4000) Data frame received for 1 I0428 00:05:32.574616 7 log.go:172] (0xc001f08820) (1) Data frame handling I0428 00:05:32.574670 7 log.go:172] (0xc001f08820) (1) Data frame sent I0428 00:05:32.574695 7 log.go:172] (0xc0020d4000) (0xc001f08820) Stream removed, broadcasting: 1 I0428 00:05:32.574716 7 log.go:172] (0xc0020d4000) Go away received I0428 00:05:32.575084 7 log.go:172] (0xc0020d4000) (0xc001f08820) Stream removed, broadcasting: 1 I0428 00:05:32.575110 7 log.go:172] (0xc0020d4000) (0xc001f088c0) Stream removed, broadcasting: 3 I0428 00:05:32.575122 7 log.go:172] (0xc0020d4000) (0xc001dfabe0) Stream removed, broadcasting: 5 Apr 28 00:05:32.575: INFO: Waiting for responses: map[] Apr 28 00:05:32.578: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.16:8080/dial?request=hostname&protocol=http&host=10.244.1.44&port=8080&tries=1'] Namespace:pod-network-test-8174 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:05:32.579: INFO: >>> kubeConfig: /root/.kube/config I0428 00:05:32.614703 7 log.go:172] (0xc002dd8580) (0xc001f88460) Create stream I0428 00:05:32.614736 7 log.go:172] (0xc002dd8580) (0xc001f88460) Stream added, broadcasting: 1 I0428 00:05:32.616516 7 log.go:172] (0xc002dd8580) Reply frame received for 1 I0428 00:05:32.616572 7 log.go:172] (0xc002dd8580) (0xc001dfac80) Create stream I0428 00:05:32.616583 7 log.go:172] (0xc002dd8580) (0xc001dfac80) Stream added, broadcasting: 3 I0428 00:05:32.617750 7 log.go:172] (0xc002dd8580) Reply frame received for 3 I0428 00:05:32.617776 7 log.go:172] (0xc002dd8580) (0xc001fb86e0) Create stream I0428 00:05:32.617786 7 
log.go:172] (0xc002dd8580) (0xc001fb86e0) Stream added, broadcasting: 5 I0428 00:05:32.618771 7 log.go:172] (0xc002dd8580) Reply frame received for 5 I0428 00:05:32.686943 7 log.go:172] (0xc002dd8580) Data frame received for 3 I0428 00:05:32.686979 7 log.go:172] (0xc001dfac80) (3) Data frame handling I0428 00:05:32.687004 7 log.go:172] (0xc001dfac80) (3) Data frame sent I0428 00:05:32.687402 7 log.go:172] (0xc002dd8580) Data frame received for 3 I0428 00:05:32.687437 7 log.go:172] (0xc001dfac80) (3) Data frame handling I0428 00:05:32.687474 7 log.go:172] (0xc002dd8580) Data frame received for 5 I0428 00:05:32.687492 7 log.go:172] (0xc001fb86e0) (5) Data frame handling I0428 00:05:32.689442 7 log.go:172] (0xc002dd8580) Data frame received for 1 I0428 00:05:32.689471 7 log.go:172] (0xc001f88460) (1) Data frame handling I0428 00:05:32.689492 7 log.go:172] (0xc001f88460) (1) Data frame sent I0428 00:05:32.689517 7 log.go:172] (0xc002dd8580) (0xc001f88460) Stream removed, broadcasting: 1 I0428 00:05:32.689607 7 log.go:172] (0xc002dd8580) (0xc001f88460) Stream removed, broadcasting: 1 I0428 00:05:32.689625 7 log.go:172] (0xc002dd8580) (0xc001dfac80) Stream removed, broadcasting: 3 I0428 00:05:32.689865 7 log.go:172] (0xc002dd8580) Go away received I0428 00:05:32.690068 7 log.go:172] (0xc002dd8580) (0xc001fb86e0) Stream removed, broadcasting: 5 Apr 28 00:05:32.690: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:05:32.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8174" for this suite. 
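[Editor's note] The intra-pod connectivity check above has the test-container-pod curl a `/dial` endpoint on itself, which in turn probes each netserver pod. A minimal sketch of how that dial URL is assembled, using the pod IPs from the log (the helper name is ours, not framework API):

```python
# Build the "dial" probe URL the e2e networking test curls from inside
# the test-container-pod. IPs and ports below come from the log above.
def dial_url(tester_ip, target_ip, protocol="http", port=8080, tries=1):
    """URL served by the tester pod; it relays a hostname request to target_ip."""
    return (
        f"http://{tester_ip}:8080/dial?request=hostname"
        f"&protocol={protocol}&host={target_ip}&port={port}&tries={tries}"
    )

# First probe in the log: tester 10.244.2.16 dials netserver at 10.244.2.15.
print(dial_url("10.244.2.16", "10.244.2.15"))
```

The empty `Waiting for responses: map[]` lines afterwards indicate every probed pod answered, so no responses remained outstanding.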
• [SLOW TEST:26.544 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":277,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:05:32.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 28 00:05:32.748: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 28 00:05:32.774: INFO: Waiting for terminating namespaces to be deleted... 
Apr 28 00:05:32.777: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 28 00:05:32.782: INFO: netserver-0 from pod-network-test-8174 started at 2020-04-28 00:05:06 +0000 UTC (1 container statuses recorded) Apr 28 00:05:32.782: INFO: Container webserver ready: true, restart count 0 Apr 28 00:05:32.782: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:05:32.782: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 00:05:32.782: INFO: test-container-pod from pod-network-test-8174 started at 2020-04-28 00:05:28 +0000 UTC (1 container statuses recorded) Apr 28 00:05:32.782: INFO: Container webserver ready: true, restart count 0 Apr 28 00:05:32.782: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:05:32.782: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 00:05:32.782: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 28 00:05:32.787: INFO: netserver-1 from pod-network-test-8174 started at 2020-04-28 00:05:06 +0000 UTC (1 container statuses recorded) Apr 28 00:05:32.787: INFO: Container webserver ready: true, restart count 0 Apr 28 00:05:32.787: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:05:32.787: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 00:05:32.787: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:05:32.787: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 28 00:05:32.868: 
INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 28 00:05:32.868: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 28 00:05:32.868: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 28 00:05:32.868: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker Apr 28 00:05:32.868: INFO: Pod netserver-0 requesting resource cpu=0m on Node latest-worker Apr 28 00:05:32.868: INFO: Pod netserver-1 requesting resource cpu=0m on Node latest-worker2 Apr 28 00:05:32.868: INFO: Pod test-container-pod requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Apr 28 00:05:32.868: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Apr 28 00:05:32.875: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-a20d68f4-7a31-4551-8c15-a1bb69e28132.1609d2fe368fd5c1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-571/filler-pod-a20d68f4-7a31-4551-8c15-a1bb69e28132 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-a20d68f4-7a31-4551-8c15-a1bb69e28132.1609d2fe81b4904a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a20d68f4-7a31-4551-8c15-a1bb69e28132.1609d2fec2596ca1], Reason = [Created], Message = [Created container filler-pod-a20d68f4-7a31-4551-8c15-a1bb69e28132] STEP: Considering event: Type = [Normal], Name = [filler-pod-a20d68f4-7a31-4551-8c15-a1bb69e28132.1609d2fedba59855], Reason = [Started], Message = [Started container filler-pod-a20d68f4-7a31-4551-8c15-a1bb69e28132] STEP: Considering event: Type = [Normal], Name = [filler-pod-dfa2df83-be4c-46f0-91be-126c05b0a978.1609d2fe35e3ddae], Reason = [Scheduled], 
Message = [Successfully assigned sched-pred-571/filler-pod-dfa2df83-be4c-46f0-91be-126c05b0a978 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-dfa2df83-be4c-46f0-91be-126c05b0a978.1609d2feb8ac03ab], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-dfa2df83-be4c-46f0-91be-126c05b0a978.1609d2feef7309ec], Reason = [Created], Message = [Created container filler-pod-dfa2df83-be4c-46f0-91be-126c05b0a978] STEP: Considering event: Type = [Normal], Name = [filler-pod-dfa2df83-be4c-46f0-91be-126c05b0a978.1609d2ff00a06722], Reason = [Started], Message = [Started container filler-pod-dfa2df83-be4c-46f0-91be-126c05b0a978] STEP: Considering event: Type = [Warning], Name = [additional-pod.1609d2ff9d0defbe], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:05:39.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-571" for this suite. 
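[Editor's note] The scheduler-predicates test above sums the existing CPU requests on each node, then creates a filler pod sized to consume the remainder, so that one more pod triggers `FailedScheduling` with `Insufficient cpu`. A rough model of that sizing; the allocatable figure is inferred from the log (100m requested plus an 11130m filler), not read from the cluster:

```python
# Size a filler pod to saturate a node's CPU: allocatable minus the sum of
# requests already scheduled there. Values in millicores.
def filler_cpu_millis(allocatable_m, requests_m):
    """CPU a filler pod must request so no further CPU remains schedulable."""
    return allocatable_m - sum(requests_m)

# latest-worker per the log: kindnet requests 100m, all other pods 0m.
# 11230m allocatable is an assumption consistent with the logged filler size.
print(filler_cpu_millis(11230, [100, 0, 0, 0]))
```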
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:7.275 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":20,"skipped":285,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:05:39.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 28 00:05:40.075: INFO: >>> kubeConfig: /root/.kube/config Apr 28 00:05:42.015: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:05:52.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6488" for this suite. • [SLOW TEST:12.692 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":21,"skipped":287,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:05:52.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2716 STEP: Creating active service to test reachability when its FQDN is referred as externalName for 
another service STEP: creating service externalsvc in namespace services-2716 STEP: creating replication controller externalsvc in namespace services-2716 I0428 00:05:52.872719 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2716, replica count: 2 I0428 00:05:55.923200 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 00:05:58.923451 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 28 00:05:58.970: INFO: Creating new exec pod Apr 28 00:06:02.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2716 execpodjxrtl -- /bin/sh -x -c nslookup nodeport-service' Apr 28 00:06:03.207: INFO: stderr: "I0428 00:06:03.103692 562 log.go:172] (0xc000a398c0) (0xc000a02a00) Create stream\nI0428 00:06:03.103748 562 log.go:172] (0xc000a398c0) (0xc000a02a00) Stream added, broadcasting: 1\nI0428 00:06:03.106972 562 log.go:172] (0xc000a398c0) Reply frame received for 1\nI0428 00:06:03.107038 562 log.go:172] (0xc000a398c0) (0xc000a02aa0) Create stream\nI0428 00:06:03.107064 562 log.go:172] (0xc000a398c0) (0xc000a02aa0) Stream added, broadcasting: 3\nI0428 00:06:03.107962 562 log.go:172] (0xc000a398c0) Reply frame received for 3\nI0428 00:06:03.108000 562 log.go:172] (0xc000a398c0) (0xc000ac6780) Create stream\nI0428 00:06:03.108014 562 log.go:172] (0xc000a398c0) (0xc000ac6780) Stream added, broadcasting: 5\nI0428 00:06:03.108889 562 log.go:172] (0xc000a398c0) Reply frame received for 5\nI0428 00:06:03.191666 562 log.go:172] (0xc000a398c0) Data frame received for 5\nI0428 00:06:03.191711 562 log.go:172] (0xc000ac6780) (5) Data frame handling\nI0428 00:06:03.191743 562 log.go:172] (0xc000ac6780) (5) Data 
frame sent\n+ nslookup nodeport-service\nI0428 00:06:03.198592 562 log.go:172] (0xc000a398c0) Data frame received for 3\nI0428 00:06:03.198622 562 log.go:172] (0xc000a02aa0) (3) Data frame handling\nI0428 00:06:03.198641 562 log.go:172] (0xc000a02aa0) (3) Data frame sent\nI0428 00:06:03.199757 562 log.go:172] (0xc000a398c0) Data frame received for 3\nI0428 00:06:03.199790 562 log.go:172] (0xc000a02aa0) (3) Data frame handling\nI0428 00:06:03.199823 562 log.go:172] (0xc000a02aa0) (3) Data frame sent\nI0428 00:06:03.200386 562 log.go:172] (0xc000a398c0) Data frame received for 3\nI0428 00:06:03.200412 562 log.go:172] (0xc000a398c0) Data frame received for 5\nI0428 00:06:03.200444 562 log.go:172] (0xc000ac6780) (5) Data frame handling\nI0428 00:06:03.200553 562 log.go:172] (0xc000a02aa0) (3) Data frame handling\nI0428 00:06:03.202075 562 log.go:172] (0xc000a398c0) Data frame received for 1\nI0428 00:06:03.202091 562 log.go:172] (0xc000a02a00) (1) Data frame handling\nI0428 00:06:03.202104 562 log.go:172] (0xc000a02a00) (1) Data frame sent\nI0428 00:06:03.202185 562 log.go:172] (0xc000a398c0) (0xc000a02a00) Stream removed, broadcasting: 1\nI0428 00:06:03.202544 562 log.go:172] (0xc000a398c0) (0xc000a02a00) Stream removed, broadcasting: 1\nI0428 00:06:03.202565 562 log.go:172] (0xc000a398c0) (0xc000a02aa0) Stream removed, broadcasting: 3\nI0428 00:06:03.202774 562 log.go:172] (0xc000a398c0) (0xc000ac6780) Stream removed, broadcasting: 5\n" Apr 28 00:06:03.208: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2716.svc.cluster.local\tcanonical name = externalsvc.services-2716.svc.cluster.local.\nName:\texternalsvc.services-2716.svc.cluster.local\nAddress: 10.96.24.193\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2716, will wait for the garbage collector to delete the pods Apr 28 00:06:03.268: INFO: Deleting ReplicationController externalsvc took: 6.519553ms Apr 28 00:06:03.568: INFO: Terminating 
ReplicationController externalsvc pods took: 300.230742ms Apr 28 00:06:13.097: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:06:13.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2716" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:20.458 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":22,"skipped":297,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:06:13.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is 
created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 28 00:06:18.278: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:06:19.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9765" for this suite. • [SLOW TEST:6.202 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":23,"skipped":307,"failed":0} S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:06:19.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-915cdd9c-d1c4-4997-b373-1eb77cf0742e Apr 28 00:06:19.633: INFO: Pod name 
my-hostname-basic-915cdd9c-d1c4-4997-b373-1eb77cf0742e: Found 0 pods out of 1 Apr 28 00:06:24.831: INFO: Pod name my-hostname-basic-915cdd9c-d1c4-4997-b373-1eb77cf0742e: Found 1 pods out of 1 Apr 28 00:06:24.831: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-915cdd9c-d1c4-4997-b373-1eb77cf0742e" are running Apr 28 00:06:24.834: INFO: Pod "my-hostname-basic-915cdd9c-d1c4-4997-b373-1eb77cf0742e-nmbkn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 00:06:19 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 00:06:24 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 00:06:24 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 00:06:19 +0000 UTC Reason: Message:}]) Apr 28 00:06:24.834: INFO: Trying to dial the pod Apr 28 00:06:29.847: INFO: Controller my-hostname-basic-915cdd9c-d1c4-4997-b373-1eb77cf0742e: Got expected result from replica 1 [my-hostname-basic-915cdd9c-d1c4-4997-b373-1eb77cf0742e-nmbkn]: "my-hostname-basic-915cdd9c-d1c4-4997-b373-1eb77cf0742e-nmbkn", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:06:29.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4659" for this suite. 
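[Editor's note] The ReplicationController test above dials each replica and passes only if the served hostname equals the pod's own name (the "Got expected result from replica 1" line). A minimal model of that per-replica check, using the pod name from the log; the function name is ours:

```python
# Each replica serves its hostname; the test requires one success per replica.
def replica_serves_own_hostname(pod_name, response):
    """True when a replica's HTTP response is exactly its own pod name."""
    return response == pod_name

pod = "my-hostname-basic-915cdd9c-d1c4-4997-b373-1eb77cf0742e-nmbkn"
print(replica_serves_own_hostname(pod, pod))
```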
• [SLOW TEST:10.515 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":24,"skipped":308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:06:29.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 00:06:29.941: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c6a9ca3-843a-4d7d-ae27-9f85280c0e76" in namespace "projected-7181" to be "Succeeded or Failed" Apr 28 00:06:29.946: INFO: Pod "downwardapi-volume-8c6a9ca3-843a-4d7d-ae27-9f85280c0e76": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.419996ms Apr 28 00:06:31.950: INFO: Pod "downwardapi-volume-8c6a9ca3-843a-4d7d-ae27-9f85280c0e76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009162093s Apr 28 00:06:33.954: INFO: Pod "downwardapi-volume-8c6a9ca3-843a-4d7d-ae27-9f85280c0e76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013579182s STEP: Saw pod success Apr 28 00:06:33.955: INFO: Pod "downwardapi-volume-8c6a9ca3-843a-4d7d-ae27-9f85280c0e76" satisfied condition "Succeeded or Failed" Apr 28 00:06:33.957: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8c6a9ca3-843a-4d7d-ae27-9f85280c0e76 container client-container: STEP: delete the pod Apr 28 00:06:33.978: INFO: Waiting for pod downwardapi-volume-8c6a9ca3-843a-4d7d-ae27-9f85280c0e76 to disappear Apr 28 00:06:33.982: INFO: Pod downwardapi-volume-8c6a9ca3-843a-4d7d-ae27-9f85280c0e76 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:06:33.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7181" for this suite. 
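[Editor's note] The downward-API test above polls the pod's phase until it reaches "Succeeded or Failed" (Pending, Pending, then Succeeded in the log, within a 5m budget). A sketch of that wait loop over a recorded phase sequence; the helper and the phase list are illustrative, not framework API:

```python
# Walk an observed sequence of pod phases and stop at the first terminal one,
# mirroring the test's "Succeeded or Failed" wait condition.
def wait_for_terminal_phase(phases):
    """Return (poll_index, phase) for the first terminal phase observed."""
    for i, phase in enumerate(phases):
        if phase in ("Succeeded", "Failed"):
            return i, phase
    raise TimeoutError("pod never reached a terminal phase")

print(wait_for_terminal_phase(["Pending", "Pending", "Succeeded"]))
```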
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":340,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:06:33.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6199 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6199 STEP: Creating statefulset with conflicting port in namespace statefulset-6199 STEP: Waiting until pod test-pod will start running in namespace statefulset-6199 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6199 Apr 28 00:06:38.133: INFO: Observed stateful pod in namespace: statefulset-6199, name: ss-0, uid: 0370df7a-883d-4d46-bcd2-628e47837ed1, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 28 00:06:38.143: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6199 STEP: Removing pod with conflicting port in namespace statefulset-6199 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6199 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 28 00:06:42.226: INFO: Deleting all statefulset in ns statefulset-6199 Apr 28 00:06:42.229: INFO: Scaling statefulset ss to 0 Apr 28 00:07:02.243: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 00:07:02.246: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:07:02.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6199" for this suite. • [SLOW TEST:28.278 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":26,"skipped":345,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:07:02.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 28 00:07:05.364: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:07:05.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8449" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":348,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:07:05.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Apr 28 00:07:05.441: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Apr 28 00:07:05.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1966' Apr 28 00:07:05.871: INFO: stderr: "" Apr 28 00:07:05.871: INFO: stdout: "service/agnhost-slave created\n" Apr 28 00:07:05.871: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Apr 28 00:07:05.871: INFO: Running
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1966'
Apr 28 00:07:06.152: INFO: stderr: ""
Apr 28 00:07:06.152: INFO: stdout: "service/agnhost-master created\n"
Apr 28 00:07:06.153: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 28 00:07:06.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1966'
Apr 28 00:07:06.494: INFO: stderr: ""
Apr 28 00:07:06.494: INFO: stdout: "service/frontend created\n"
Apr 28 00:07:06.494: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Apr 28 00:07:06.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1966'
Apr 28 00:07:06.774: INFO: stderr: ""
Apr 28 00:07:06.774: INFO: stdout: "deployment.apps/frontend created\n"
Apr 28 00:07:06.774: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 28 00:07:06.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1966'
Apr 28 00:07:07.192: INFO: stderr: ""
Apr 28 00:07:07.192: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 28 00:07:07.192: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 28 00:07:07.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1966'
Apr 28 00:07:07.486: INFO: stderr: ""
Apr 28 00:07:07.486: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 28 00:07:07.486: INFO: Waiting for all frontend pods to be Running.
Apr 28 00:07:17.536: INFO: Waiting for frontend to serve content.
Apr 28 00:07:17.569: INFO: Trying to add a new entry to the guestbook.
Apr 28 00:07:17.611: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 28 00:07:17.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1966'
Apr 28 00:07:17.785: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Apr 28 00:07:17.785: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 28 00:07:17.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1966' Apr 28 00:07:17.911: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 00:07:17.911: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 28 00:07:17.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1966' Apr 28 00:07:18.033: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 00:07:18.033: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 28 00:07:18.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1966' Apr 28 00:07:18.150: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 28 00:07:18.150: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 28 00:07:18.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1966' Apr 28 00:07:18.256: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 00:07:18.256: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 28 00:07:18.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1966' Apr 28 00:07:18.370: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 00:07:18.370: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:07:18.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1966" for this suite. 
• [SLOW TEST:12.976 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":28,"skipped":358,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:07:18.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 28 00:07:23.230: INFO: Successfully updated pod "annotationupdate42542147-2456-425b-85b2-a922e1bad7c8" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:07:25.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1321" for 
this suite. • [SLOW TEST:6.875 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":374,"failed":0} SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:07:25.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 28 00:07:29.860: INFO: Successfully updated pod "pod-update-activedeadlineseconds-24ca71c5-f2f6-4a74-86d7-fbba6540d885" Apr 28 00:07:29.860: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-24ca71c5-f2f6-4a74-86d7-fbba6540d885" in namespace "pods-5680" to be "terminated due to deadline exceeded" Apr 28 00:07:29.885: INFO: Pod "pod-update-activedeadlineseconds-24ca71c5-f2f6-4a74-86d7-fbba6540d885": 
Phase="Running", Reason="", readiness=true. Elapsed: 24.525462ms Apr 28 00:07:31.889: INFO: Pod "pod-update-activedeadlineseconds-24ca71c5-f2f6-4a74-86d7-fbba6540d885": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.028666686s Apr 28 00:07:31.889: INFO: Pod "pod-update-activedeadlineseconds-24ca71c5-f2f6-4a74-86d7-fbba6540d885" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:07:31.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5680" for this suite. • [SLOW TEST:6.645 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":376,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:07:31.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:07:32.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3445" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":31,"skipped":383,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:07:32.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 28 00:07:36.688: INFO: Successfully updated pod "annotationupdate8f9c1c3c-594a-43d5-87a4-c7308085a440" [AfterEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:07:38.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7232" for this suite. • [SLOW TEST:6.694 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":403,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:07:38.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 28 00:07:38.821: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:07:45.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-44" for this suite. • [SLOW TEST:6.717 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":33,"skipped":423,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:07:45.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:07:50.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "kubelet-test-8734" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":453,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:07:50.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-2q9p STEP: Creating a pod to test atomic-volume-subpath Apr 28 00:07:50.137: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2q9p" in namespace "subpath-9558" to be "Succeeded or Failed" Apr 28 00:07:50.145: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.421551ms Apr 28 00:07:52.185: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048306623s Apr 28 00:07:54.189: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Running", Reason="", readiness=true. Elapsed: 4.052457544s Apr 28 00:07:56.194: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.056573514s Apr 28 00:07:58.198: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Running", Reason="", readiness=true. Elapsed: 8.060975957s Apr 28 00:08:00.201: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Running", Reason="", readiness=true. Elapsed: 10.064361606s Apr 28 00:08:02.205: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Running", Reason="", readiness=true. Elapsed: 12.068015861s Apr 28 00:08:04.208: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Running", Reason="", readiness=true. Elapsed: 14.071015184s Apr 28 00:08:06.213: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Running", Reason="", readiness=true. Elapsed: 16.075925968s Apr 28 00:08:08.217: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Running", Reason="", readiness=true. Elapsed: 18.079798945s Apr 28 00:08:10.221: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Running", Reason="", readiness=true. Elapsed: 20.084367144s Apr 28 00:08:12.226: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Running", Reason="", readiness=true. Elapsed: 22.088544393s Apr 28 00:08:14.230: INFO: Pod "pod-subpath-test-configmap-2q9p": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.0929516s STEP: Saw pod success Apr 28 00:08:14.230: INFO: Pod "pod-subpath-test-configmap-2q9p" satisfied condition "Succeeded or Failed" Apr 28 00:08:14.233: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-2q9p container test-container-subpath-configmap-2q9p: STEP: delete the pod Apr 28 00:08:14.251: INFO: Waiting for pod pod-subpath-test-configmap-2q9p to disappear Apr 28 00:08:14.255: INFO: Pod pod-subpath-test-configmap-2q9p no longer exists STEP: Deleting pod pod-subpath-test-configmap-2q9p Apr 28 00:08:14.255: INFO: Deleting pod "pod-subpath-test-configmap-2q9p" in namespace "subpath-9558" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:08:14.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9558" for this suite. • [SLOW TEST:24.258 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":35,"skipped":486,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes 
client Apr 28 00:08:14.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-3428 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 28 00:08:14.387: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 28 00:08:14.473: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 28 00:08:16.558: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 28 00:08:18.480: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:08:20.478: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:08:22.479: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:08:24.478: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:08:26.479: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:08:28.477: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:08:30.477: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:08:32.478: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:08:34.477: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 28 00:08:34.483: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 28 00:08:38.512: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.28:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3428 PodName:host-test-container-pod ContainerName:agnhost Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:08:38.513: INFO: >>> kubeConfig: /root/.kube/config I0428 00:08:38.550368 7 log.go:172] (0xc0020d4210) (0xc001fb92c0) Create stream I0428 00:08:38.550401 7 log.go:172] (0xc0020d4210) (0xc001fb92c0) Stream added, broadcasting: 1 I0428 00:08:38.552105 7 log.go:172] (0xc0020d4210) Reply frame received for 1 I0428 00:08:38.552164 7 log.go:172] (0xc0020d4210) (0xc00111d7c0) Create stream I0428 00:08:38.552182 7 log.go:172] (0xc0020d4210) (0xc00111d7c0) Stream added, broadcasting: 3 I0428 00:08:38.553454 7 log.go:172] (0xc0020d4210) Reply frame received for 3 I0428 00:08:38.553495 7 log.go:172] (0xc0020d4210) (0xc001d8b360) Create stream I0428 00:08:38.553506 7 log.go:172] (0xc0020d4210) (0xc001d8b360) Stream added, broadcasting: 5 I0428 00:08:38.554708 7 log.go:172] (0xc0020d4210) Reply frame received for 5 I0428 00:08:38.632196 7 log.go:172] (0xc0020d4210) Data frame received for 3 I0428 00:08:38.632227 7 log.go:172] (0xc00111d7c0) (3) Data frame handling I0428 00:08:38.632245 7 log.go:172] (0xc00111d7c0) (3) Data frame sent I0428 00:08:38.632434 7 log.go:172] (0xc0020d4210) Data frame received for 5 I0428 00:08:38.632449 7 log.go:172] (0xc001d8b360) (5) Data frame handling I0428 00:08:38.632486 7 log.go:172] (0xc0020d4210) Data frame received for 3 I0428 00:08:38.632521 7 log.go:172] (0xc00111d7c0) (3) Data frame handling I0428 00:08:38.634379 7 log.go:172] (0xc0020d4210) Data frame received for 1 I0428 00:08:38.634396 7 log.go:172] (0xc001fb92c0) (1) Data frame handling I0428 00:08:38.634404 7 log.go:172] (0xc001fb92c0) (1) Data frame sent I0428 00:08:38.634413 7 log.go:172] (0xc0020d4210) (0xc001fb92c0) Stream removed, broadcasting: 1 I0428 00:08:38.634450 7 log.go:172] (0xc0020d4210) Go away received I0428 00:08:38.634494 7 log.go:172] (0xc0020d4210) (0xc001fb92c0) Stream removed, broadcasting: 1 I0428 00:08:38.634512 7 log.go:172] (0xc0020d4210) (0xc00111d7c0) Stream removed, broadcasting: 3 
I0428 00:08:38.634524 7 log.go:172] (0xc0020d4210) (0xc001d8b360) Stream removed, broadcasting: 5 Apr 28 00:08:38.634: INFO: Found all expected endpoints: [netserver-0] Apr 28 00:08:38.638: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.58:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3428 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:08:38.638: INFO: >>> kubeConfig: /root/.kube/config I0428 00:08:38.669885 7 log.go:172] (0xc0023dd550) (0xc001d8b900) Create stream I0428 00:08:38.669959 7 log.go:172] (0xc0023dd550) (0xc001d8b900) Stream added, broadcasting: 1 I0428 00:08:38.672099 7 log.go:172] (0xc0023dd550) Reply frame received for 1 I0428 00:08:38.672187 7 log.go:172] (0xc0023dd550) (0xc001d8b9a0) Create stream I0428 00:08:38.672215 7 log.go:172] (0xc0023dd550) (0xc001d8b9a0) Stream added, broadcasting: 3 I0428 00:08:38.673238 7 log.go:172] (0xc0023dd550) Reply frame received for 3 I0428 00:08:38.673274 7 log.go:172] (0xc0023dd550) (0xc001fb9400) Create stream I0428 00:08:38.673286 7 log.go:172] (0xc0023dd550) (0xc001fb9400) Stream added, broadcasting: 5 I0428 00:08:38.674115 7 log.go:172] (0xc0023dd550) Reply frame received for 5 I0428 00:08:38.748472 7 log.go:172] (0xc0023dd550) Data frame received for 5 I0428 00:08:38.748514 7 log.go:172] (0xc001fb9400) (5) Data frame handling I0428 00:08:38.748547 7 log.go:172] (0xc0023dd550) Data frame received for 3 I0428 00:08:38.748561 7 log.go:172] (0xc001d8b9a0) (3) Data frame handling I0428 00:08:38.748577 7 log.go:172] (0xc001d8b9a0) (3) Data frame sent I0428 00:08:38.748592 7 log.go:172] (0xc0023dd550) Data frame received for 3 I0428 00:08:38.748605 7 log.go:172] (0xc001d8b9a0) (3) Data frame handling I0428 00:08:38.750863 7 log.go:172] (0xc0023dd550) Data frame received for 1 I0428 00:08:38.750906 7 log.go:172] (0xc001d8b900) (1) Data frame handling I0428 
00:08:38.750932 7 log.go:172] (0xc001d8b900) (1) Data frame sent I0428 00:08:38.750951 7 log.go:172] (0xc0023dd550) (0xc001d8b900) Stream removed, broadcasting: 1 I0428 00:08:38.751009 7 log.go:172] (0xc0023dd550) Go away received I0428 00:08:38.751210 7 log.go:172] (0xc0023dd550) (0xc001d8b900) Stream removed, broadcasting: 1 I0428 00:08:38.751236 7 log.go:172] (0xc0023dd550) (0xc001d8b9a0) Stream removed, broadcasting: 3 I0428 00:08:38.751251 7 log.go:172] (0xc0023dd550) (0xc001fb9400) Stream removed, broadcasting: 5 Apr 28 00:08:38.751: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:08:38.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3428" for this suite. • [SLOW TEST:24.492 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":487,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:08:38.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 00:08:38.862: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26c887a8-49d7-4cbf-aea7-71dfd7dcd0a9" in namespace "downward-api-3106" to be "Succeeded or Failed" Apr 28 00:08:38.886: INFO: Pod "downwardapi-volume-26c887a8-49d7-4cbf-aea7-71dfd7dcd0a9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.185119ms Apr 28 00:08:40.890: INFO: Pod "downwardapi-volume-26c887a8-49d7-4cbf-aea7-71dfd7dcd0a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027331223s Apr 28 00:08:42.893: INFO: Pod "downwardapi-volume-26c887a8-49d7-4cbf-aea7-71dfd7dcd0a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031200058s STEP: Saw pod success Apr 28 00:08:42.893: INFO: Pod "downwardapi-volume-26c887a8-49d7-4cbf-aea7-71dfd7dcd0a9" satisfied condition "Succeeded or Failed" Apr 28 00:08:42.896: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-26c887a8-49d7-4cbf-aea7-71dfd7dcd0a9 container client-container: STEP: delete the pod Apr 28 00:08:42.953: INFO: Waiting for pod downwardapi-volume-26c887a8-49d7-4cbf-aea7-71dfd7dcd0a9 to disappear Apr 28 00:08:42.956: INFO: Pod downwardapi-volume-26c887a8-49d7-4cbf-aea7-71dfd7dcd0a9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:08:42.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3106" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":503,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:08:42.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-chg7 STEP: Creating a pod to test atomic-volume-subpath Apr 28 00:08:43.068: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-chg7" in namespace "subpath-5315" to be "Succeeded or Failed" Apr 28 00:08:43.071: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.821061ms Apr 28 00:08:45.132: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063984783s Apr 28 00:08:47.136: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Running", Reason="", readiness=true. Elapsed: 4.067954087s Apr 28 00:08:49.140: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Running", Reason="", readiness=true. Elapsed: 6.071970133s Apr 28 00:08:51.145: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Running", Reason="", readiness=true. Elapsed: 8.076475553s Apr 28 00:08:53.149: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Running", Reason="", readiness=true. Elapsed: 10.080977233s Apr 28 00:08:55.153: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Running", Reason="", readiness=true. Elapsed: 12.085073001s Apr 28 00:08:57.157: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Running", Reason="", readiness=true. Elapsed: 14.089317441s Apr 28 00:08:59.162: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Running", Reason="", readiness=true. Elapsed: 16.093426806s Apr 28 00:09:01.166: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Running", Reason="", readiness=true. Elapsed: 18.097723083s Apr 28 00:09:03.170: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Running", Reason="", readiness=true. Elapsed: 20.102151846s Apr 28 00:09:05.174: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Running", Reason="", readiness=true. Elapsed: 22.105860504s Apr 28 00:09:07.178: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.109755689s Apr 28 00:09:09.181: INFO: Pod "pod-subpath-test-secret-chg7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.113162514s STEP: Saw pod success Apr 28 00:09:09.181: INFO: Pod "pod-subpath-test-secret-chg7" satisfied condition "Succeeded or Failed" Apr 28 00:09:09.183: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-chg7 container test-container-subpath-secret-chg7: STEP: delete the pod Apr 28 00:09:09.266: INFO: Waiting for pod pod-subpath-test-secret-chg7 to disappear Apr 28 00:09:09.314: INFO: Pod pod-subpath-test-secret-chg7 no longer exists STEP: Deleting pod pod-subpath-test-secret-chg7 Apr 28 00:09:09.314: INFO: Deleting pod "pod-subpath-test-secret-chg7" in namespace "subpath-5315" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:09:09.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5315" for this suite. 
• [SLOW TEST:26.365 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":38,"skipped":515,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:09:09.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-3401/configmap-test-1a50247e-4d7f-452e-9b29-76d61d48c75f
STEP: Creating a pod to test consume configMaps
Apr 28 00:09:09.406: INFO: Waiting up to 5m0s for pod "pod-configmaps-b52394f6-78e1-4ee1-99ca-41f5f8479351" in namespace "configmap-3401" to be "Succeeded or Failed"
Apr 28 00:09:09.417: INFO: Pod "pod-configmaps-b52394f6-78e1-4ee1-99ca-41f5f8479351": Phase="Pending", Reason="", readiness=false.
Elapsed: 10.883825ms Apr 28 00:09:11.421: INFO: Pod "pod-configmaps-b52394f6-78e1-4ee1-99ca-41f5f8479351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014649615s Apr 28 00:09:13.425: INFO: Pod "pod-configmaps-b52394f6-78e1-4ee1-99ca-41f5f8479351": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019043079s STEP: Saw pod success Apr 28 00:09:13.425: INFO: Pod "pod-configmaps-b52394f6-78e1-4ee1-99ca-41f5f8479351" satisfied condition "Succeeded or Failed" Apr 28 00:09:13.428: INFO: Trying to get logs from node latest-worker pod pod-configmaps-b52394f6-78e1-4ee1-99ca-41f5f8479351 container env-test: STEP: delete the pod Apr 28 00:09:13.450: INFO: Waiting for pod pod-configmaps-b52394f6-78e1-4ee1-99ca-41f5f8479351 to disappear Apr 28 00:09:13.454: INFO: Pod pod-configmaps-b52394f6-78e1-4ee1-99ca-41f5f8479351 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:09:13.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3401" for this suite. 
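The ConfigMap test above injects a ConfigMap key into the test container as an environment variable. As a rough illustration of that pattern only (all names below are invented, not the generated `configmap-test-…`/`pod-configmaps-…` objects from the run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]   # dump the environment so the value is visible in logs
    env:
    - name: CONFIG_DATA            # hypothetical variable name
      valueFrom:
        configMapKeyRef:
          name: example-config     # hypothetical ConfigMap
          key: data-1              # hypothetical key
```

The test then reads the container's logs (as in "Trying to get logs from node … container env-test" above) to assert the variable carries the expected value.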
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:09:13.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 28 00:09:18.087: INFO: Successfully updated pod "pod-update-29d23480-4624-4be7-9259-1d80b9f07a7c" STEP: verifying the updated pod is in kubernetes Apr 28 00:09:18.104: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:09:18.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9386" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":627,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:09:18.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-5ndd STEP: Creating a pod to test atomic-volume-subpath Apr 28 00:09:18.307: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5ndd" in namespace "subpath-7961" to be "Succeeded or Failed" Apr 28 00:09:18.311: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.59348ms Apr 28 00:09:20.315: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007456009s Apr 28 00:09:22.319: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Running", Reason="", readiness=true. Elapsed: 4.011789799s Apr 28 00:09:24.323: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Running", Reason="", readiness=true. Elapsed: 6.016005851s Apr 28 00:09:26.327: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.020073519s Apr 28 00:09:28.332: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Running", Reason="", readiness=true. Elapsed: 10.024523881s Apr 28 00:09:30.336: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Running", Reason="", readiness=true. Elapsed: 12.028836881s Apr 28 00:09:32.341: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Running", Reason="", readiness=true. Elapsed: 14.03378957s Apr 28 00:09:34.345: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Running", Reason="", readiness=true. Elapsed: 16.038026217s Apr 28 00:09:36.349: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Running", Reason="", readiness=true. Elapsed: 18.042083838s Apr 28 00:09:38.353: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Running", Reason="", readiness=true. Elapsed: 20.046290029s Apr 28 00:09:40.357: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Running", Reason="", readiness=true. Elapsed: 22.050161782s Apr 28 00:09:42.361: INFO: Pod "pod-subpath-test-downwardapi-5ndd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.053894433s STEP: Saw pod success Apr 28 00:09:42.361: INFO: Pod "pod-subpath-test-downwardapi-5ndd" satisfied condition "Succeeded or Failed" Apr 28 00:09:42.363: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-5ndd container test-container-subpath-downwardapi-5ndd: STEP: delete the pod Apr 28 00:09:42.387: INFO: Waiting for pod pod-subpath-test-downwardapi-5ndd to disappear Apr 28 00:09:42.409: INFO: Pod pod-subpath-test-downwardapi-5ndd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-5ndd Apr 28 00:09:42.409: INFO: Deleting pod "pod-subpath-test-downwardapi-5ndd" in namespace "subpath-7961" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:09:42.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7961" for this suite. • [SLOW TEST:24.307 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":41,"skipped":634,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:09:42.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:09:42.516: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:09:43.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7796" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":42,"skipped":652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:09:43.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:09:43.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4486" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":689,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:09:43.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-33c5cd4e-c639-499d-896a-1498eeead045 STEP: Creating a pod to test consume secrets Apr 28 00:09:43.864: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec52aa01-7918-40c2-85fa-6a82d09ccda8" in namespace "projected-8975" to 
be "Succeeded or Failed" Apr 28 00:09:43.868: INFO: Pod "pod-projected-secrets-ec52aa01-7918-40c2-85fa-6a82d09ccda8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.988193ms Apr 28 00:09:45.871: INFO: Pod "pod-projected-secrets-ec52aa01-7918-40c2-85fa-6a82d09ccda8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007449626s Apr 28 00:09:47.875: INFO: Pod "pod-projected-secrets-ec52aa01-7918-40c2-85fa-6a82d09ccda8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011663752s STEP: Saw pod success Apr 28 00:09:47.876: INFO: Pod "pod-projected-secrets-ec52aa01-7918-40c2-85fa-6a82d09ccda8" satisfied condition "Succeeded or Failed" Apr 28 00:09:47.879: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-ec52aa01-7918-40c2-85fa-6a82d09ccda8 container projected-secret-volume-test: STEP: delete the pod Apr 28 00:09:47.927: INFO: Waiting for pod pod-projected-secrets-ec52aa01-7918-40c2-85fa-6a82d09ccda8 to disappear Apr 28 00:09:47.940: INFO: Pod pod-projected-secrets-ec52aa01-7918-40c2-85fa-6a82d09ccda8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:09:47.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8975" for this suite. 
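The projected-secret test above mounts a Secret into the pod through a `projected` volume rather than a plain `secret` volume. A minimal manifest for that pattern might look like this (names and keys below are invented, not the generated `projected-secret-test-…` objects):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/projected/secret-data"]
    volumeMounts:
    - name: projected-vol
      mountPath: /projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: example-secret     # hypothetical Secret
          items:
          - key: data              # hypothetical key in the Secret
            path: secret-data      # file name under the mount path
```

A projected volume can combine several sources (Secrets, ConfigMaps, downward API fields) under one mount point, which is why the suite tests it separately from plain secret volumes.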
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:09:47.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:09:48.045: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 28 00:09:50.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9697 create -f -' Apr 28 00:09:54.104: INFO: stderr: "" Apr 28 00:09:54.104: INFO: stdout: "e2e-test-crd-publish-openapi-2147-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 28 00:09:54.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9697 delete e2e-test-crd-publish-openapi-2147-crds test-cr' Apr 28 00:09:54.207: INFO: stderr: "" Apr 28 00:09:54.208: INFO: stdout: "e2e-test-crd-publish-openapi-2147-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 28 00:09:54.208: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9697 apply -f -' Apr 28 00:09:54.464: INFO: stderr: "" Apr 28 00:09:54.464: INFO: stdout: "e2e-test-crd-publish-openapi-2147-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 28 00:09:54.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9697 delete e2e-test-crd-publish-openapi-2147-crds test-cr' Apr 28 00:09:54.580: INFO: stderr: "" Apr 28 00:09:54.580: INFO: stdout: "e2e-test-crd-publish-openapi-2147-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 28 00:09:54.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2147-crds' Apr 28 00:09:54.834: INFO: stderr: "" Apr 28 00:09:54.834: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2147-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:09:56.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9697" for this suite. 
• [SLOW TEST:8.819 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":45,"skipped":752,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:09:56.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 28 00:09:56.831: INFO: Waiting up to 5m0s for pod "pod-98d78b49-9894-4113-8169-72e342b9eddf" in namespace "emptydir-1255" to be "Succeeded or Failed"
Apr 28 00:09:56.847: INFO: Pod "pod-98d78b49-9894-4113-8169-72e342b9eddf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.580496ms
Apr 28 00:09:58.851: INFO: Pod "pod-98d78b49-9894-4113-8169-72e342b9eddf": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.01997282s Apr 28 00:10:00.856: INFO: Pod "pod-98d78b49-9894-4113-8169-72e342b9eddf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024532115s STEP: Saw pod success Apr 28 00:10:00.856: INFO: Pod "pod-98d78b49-9894-4113-8169-72e342b9eddf" satisfied condition "Succeeded or Failed" Apr 28 00:10:00.859: INFO: Trying to get logs from node latest-worker pod pod-98d78b49-9894-4113-8169-72e342b9eddf container test-container: STEP: delete the pod Apr 28 00:10:00.896: INFO: Waiting for pod pod-98d78b49-9894-4113-8169-72e342b9eddf to disappear Apr 28 00:10:00.907: INFO: Pod pod-98d78b49-9894-4113-8169-72e342b9eddf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:10:00.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1255" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":768,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:10:00.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: 
Creating RC which spawns configmap-volume pods Apr 28 00:10:01.458: INFO: Pod name wrapped-volume-race-cfe0f7ac-dc19-40fb-86e4-873fd93258a4: Found 0 pods out of 5 Apr 28 00:10:06.475: INFO: Pod name wrapped-volume-race-cfe0f7ac-dc19-40fb-86e4-873fd93258a4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cfe0f7ac-dc19-40fb-86e4-873fd93258a4 in namespace emptydir-wrapper-3189, will wait for the garbage collector to delete the pods Apr 28 00:10:20.568: INFO: Deleting ReplicationController wrapped-volume-race-cfe0f7ac-dc19-40fb-86e4-873fd93258a4 took: 18.834271ms Apr 28 00:10:20.868: INFO: Terminating ReplicationController wrapped-volume-race-cfe0f7ac-dc19-40fb-86e4-873fd93258a4 pods took: 300.279645ms STEP: Creating RC which spawns configmap-volume pods Apr 28 00:10:33.001: INFO: Pod name wrapped-volume-race-bd7f2d17-bc40-4a7f-9fd2-bddbbfb874dc: Found 0 pods out of 5 Apr 28 00:10:38.008: INFO: Pod name wrapped-volume-race-bd7f2d17-bc40-4a7f-9fd2-bddbbfb874dc: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-bd7f2d17-bc40-4a7f-9fd2-bddbbfb874dc in namespace emptydir-wrapper-3189, will wait for the garbage collector to delete the pods Apr 28 00:10:52.246: INFO: Deleting ReplicationController wrapped-volume-race-bd7f2d17-bc40-4a7f-9fd2-bddbbfb874dc took: 5.125983ms Apr 28 00:10:52.646: INFO: Terminating ReplicationController wrapped-volume-race-bd7f2d17-bc40-4a7f-9fd2-bddbbfb874dc pods took: 400.312237ms STEP: Creating RC which spawns configmap-volume pods Apr 28 00:11:03.495: INFO: Pod name wrapped-volume-race-00040daf-5e5a-4550-9b01-9c01edd639fa: Found 0 pods out of 5 Apr 28 00:11:08.503: INFO: Pod name wrapped-volume-race-00040daf-5e5a-4550-9b01-9c01edd639fa: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-00040daf-5e5a-4550-9b01-9c01edd639fa in namespace 
emptydir-wrapper-3189, will wait for the garbage collector to delete the pods Apr 28 00:11:22.617: INFO: Deleting ReplicationController wrapped-volume-race-00040daf-5e5a-4550-9b01-9c01edd639fa took: 7.768992ms Apr 28 00:11:22.917: INFO: Terminating ReplicationController wrapped-volume-race-00040daf-5e5a-4550-9b01-9c01edd639fa pods took: 300.289634ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:11:34.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3189" for this suite. • [SLOW TEST:93.413 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":47,"skipped":781,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:11:34.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-5256 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 28 00:11:34.365: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 28 00:11:34.432: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 28 00:11:36.436: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 28 00:11:38.436: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:11:40.445: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:11:42.436: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:11:44.436: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:11:46.436: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:11:48.436: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:11:50.436: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:11:52.436: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:11:54.436: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 00:11:56.436: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 28 00:11:56.442: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 28 00:12:00.463: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:8080/dial?request=hostname&protocol=udp&host=10.244.2.49&port=8081&tries=1'] Namespace:pod-network-test-5256 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:12:00.463: INFO: >>> kubeConfig: /root/.kube/config I0428 00:12:00.501519 7 log.go:172] (0xc0020d4160) (0xc001f083c0) Create stream I0428 
00:12:00.501546 7 log.go:172] (0xc0020d4160) (0xc001f083c0) Stream added, broadcasting: 1 I0428 00:12:00.503486 7 log.go:172] (0xc0020d4160) Reply frame received for 1 I0428 00:12:00.503541 7 log.go:172] (0xc0020d4160) (0xc001f08460) Create stream I0428 00:12:00.503565 7 log.go:172] (0xc0020d4160) (0xc001f08460) Stream added, broadcasting: 3 I0428 00:12:00.504656 7 log.go:172] (0xc0020d4160) Reply frame received for 3 I0428 00:12:00.504688 7 log.go:172] (0xc0020d4160) (0xc001f08640) Create stream I0428 00:12:00.504699 7 log.go:172] (0xc0020d4160) (0xc001f08640) Stream added, broadcasting: 5 I0428 00:12:00.506213 7 log.go:172] (0xc0020d4160) Reply frame received for 5 I0428 00:12:00.592331 7 log.go:172] (0xc0020d4160) Data frame received for 3 I0428 00:12:00.592382 7 log.go:172] (0xc001f08460) (3) Data frame handling I0428 00:12:00.592415 7 log.go:172] (0xc001f08460) (3) Data frame sent I0428 00:12:00.592602 7 log.go:172] (0xc0020d4160) Data frame received for 3 I0428 00:12:00.592635 7 log.go:172] (0xc001f08460) (3) Data frame handling I0428 00:12:00.592748 7 log.go:172] (0xc0020d4160) Data frame received for 5 I0428 00:12:00.592775 7 log.go:172] (0xc001f08640) (5) Data frame handling I0428 00:12:00.594442 7 log.go:172] (0xc0020d4160) Data frame received for 1 I0428 00:12:00.594487 7 log.go:172] (0xc001f083c0) (1) Data frame handling I0428 00:12:00.594505 7 log.go:172] (0xc001f083c0) (1) Data frame sent I0428 00:12:00.594521 7 log.go:172] (0xc0020d4160) (0xc001f083c0) Stream removed, broadcasting: 1 I0428 00:12:00.594541 7 log.go:172] (0xc0020d4160) Go away received I0428 00:12:00.594684 7 log.go:172] (0xc0020d4160) (0xc001f083c0) Stream removed, broadcasting: 1 I0428 00:12:00.594716 7 log.go:172] (0xc0020d4160) (0xc001f08460) Stream removed, broadcasting: 3 I0428 00:12:00.594751 7 log.go:172] (0xc0020d4160) (0xc001f08640) Stream removed, broadcasting: 5 Apr 28 00:12:00.594: INFO: Waiting for responses: map[] Apr 28 00:12:00.598: INFO: ExecWithOptions 
{Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:8080/dial?request=hostname&protocol=udp&host=10.244.1.62&port=8081&tries=1'] Namespace:pod-network-test-5256 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:12:00.598: INFO: >>> kubeConfig: /root/.kube/config I0428 00:12:00.629542 7 log.go:172] (0xc0020d44d0) (0xc001f08820) Create stream I0428 00:12:00.629570 7 log.go:172] (0xc0020d44d0) (0xc001f08820) Stream added, broadcasting: 1 I0428 00:12:00.631724 7 log.go:172] (0xc0020d44d0) Reply frame received for 1 I0428 00:12:00.631761 7 log.go:172] (0xc0020d44d0) (0xc001d8a6e0) Create stream I0428 00:12:00.631774 7 log.go:172] (0xc0020d44d0) (0xc001d8a6e0) Stream added, broadcasting: 3 I0428 00:12:00.632877 7 log.go:172] (0xc0020d44d0) Reply frame received for 3 I0428 00:12:00.632917 7 log.go:172] (0xc0020d44d0) (0xc001d8a780) Create stream I0428 00:12:00.632933 7 log.go:172] (0xc0020d44d0) (0xc001d8a780) Stream added, broadcasting: 5 I0428 00:12:00.634140 7 log.go:172] (0xc0020d44d0) Reply frame received for 5 I0428 00:12:00.715772 7 log.go:172] (0xc0020d44d0) Data frame received for 3 I0428 00:12:00.715804 7 log.go:172] (0xc001d8a6e0) (3) Data frame handling I0428 00:12:00.715823 7 log.go:172] (0xc001d8a6e0) (3) Data frame sent I0428 00:12:00.716679 7 log.go:172] (0xc0020d44d0) Data frame received for 3 I0428 00:12:00.716724 7 log.go:172] (0xc001d8a6e0) (3) Data frame handling I0428 00:12:00.716755 7 log.go:172] (0xc0020d44d0) Data frame received for 5 I0428 00:12:00.716767 7 log.go:172] (0xc001d8a780) (5) Data frame handling I0428 00:12:00.718797 7 log.go:172] (0xc0020d44d0) Data frame received for 1 I0428 00:12:00.718825 7 log.go:172] (0xc001f08820) (1) Data frame handling I0428 00:12:00.718840 7 log.go:172] (0xc001f08820) (1) Data frame sent I0428 00:12:00.718852 7 log.go:172] (0xc0020d44d0) (0xc001f08820) Stream removed, broadcasting: 1 I0428 00:12:00.718953 7 log.go:172] 
(0xc0020d44d0) (0xc001f08820) Stream removed, broadcasting: 1 I0428 00:12:00.719024 7 log.go:172] (0xc0020d44d0) (0xc001d8a6e0) Stream removed, broadcasting: 3 I0428 00:12:00.719052 7 log.go:172] (0xc0020d44d0) (0xc001d8a780) Stream removed, broadcasting: 5 Apr 28 00:12:00.719: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:12:00.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0428 00:12:00.719454 7 log.go:172] (0xc0020d44d0) Go away received STEP: Destroying namespace "pod-network-test-5256" for this suite. • [SLOW TEST:26.397 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":794,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:12:00.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from 
pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-44e52bfb-c269-4450-afac-99f7c1d5894e STEP: Creating a pod to test consume configMaps Apr 28 00:12:00.808: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a8e913a8-1a3c-4358-ac8b-aea065171fdc" in namespace "projected-7801" to be "Succeeded or Failed" Apr 28 00:12:00.848: INFO: Pod "pod-projected-configmaps-a8e913a8-1a3c-4358-ac8b-aea065171fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 39.506005ms Apr 28 00:12:02.853: INFO: Pod "pod-projected-configmaps-a8e913a8-1a3c-4358-ac8b-aea065171fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044268549s Apr 28 00:12:04.857: INFO: Pod "pod-projected-configmaps-a8e913a8-1a3c-4358-ac8b-aea065171fdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048289044s STEP: Saw pod success Apr 28 00:12:04.857: INFO: Pod "pod-projected-configmaps-a8e913a8-1a3c-4358-ac8b-aea065171fdc" satisfied condition "Succeeded or Failed" Apr 28 00:12:04.860: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-a8e913a8-1a3c-4358-ac8b-aea065171fdc container projected-configmap-volume-test: STEP: delete the pod Apr 28 00:12:04.903: INFO: Waiting for pod pod-projected-configmaps-a8e913a8-1a3c-4358-ac8b-aea065171fdc to disappear Apr 28 00:12:04.917: INFO: Pod pod-projected-configmaps-a8e913a8-1a3c-4358-ac8b-aea065171fdc no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:12:04.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7801" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":798,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:12:04.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Apr 28 00:12:05.049: INFO: Waiting up to 5m0s for pod "var-expansion-b874f7e6-3c83-4f00-8b75-620e71578692" in namespace "var-expansion-4237" to be "Succeeded or Failed" Apr 28 00:12:05.055: INFO: Pod "var-expansion-b874f7e6-3c83-4f00-8b75-620e71578692": Phase="Pending", Reason="", readiness=false. Elapsed: 5.404985ms Apr 28 00:12:07.082: INFO: Pod "var-expansion-b874f7e6-3c83-4f00-8b75-620e71578692": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032432772s Apr 28 00:12:09.085: INFO: Pod "var-expansion-b874f7e6-3c83-4f00-8b75-620e71578692": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035765106s Apr 28 00:12:11.089: INFO: Pod "var-expansion-b874f7e6-3c83-4f00-8b75-620e71578692": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.039839777s STEP: Saw pod success Apr 28 00:12:11.089: INFO: Pod "var-expansion-b874f7e6-3c83-4f00-8b75-620e71578692" satisfied condition "Succeeded or Failed" Apr 28 00:12:11.092: INFO: Trying to get logs from node latest-worker pod var-expansion-b874f7e6-3c83-4f00-8b75-620e71578692 container dapi-container: STEP: delete the pod Apr 28 00:12:11.128: INFO: Waiting for pod var-expansion-b874f7e6-3c83-4f00-8b75-620e71578692 to disappear Apr 28 00:12:11.154: INFO: Pod var-expansion-b874f7e6-3c83-4f00-8b75-620e71578692 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:12:11.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4237" for this suite. • [SLOW TEST:6.241 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":805,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:12:11.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 00:12:11.656: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 00:12:13.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629531, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629531, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629531, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629531, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 00:12:16.924: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:12:16.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the 
AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:12:18.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8528" for this suite. STEP: Destroying namespace "webhook-8528-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.981 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":51,"skipped":816,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:12:18.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 00:12:18.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c681e580-a3f7-4924-b52a-7dc33238dbe9" in namespace "projected-8274" to be "Succeeded or Failed" Apr 28 00:12:18.234: INFO: Pod "downwardapi-volume-c681e580-a3f7-4924-b52a-7dc33238dbe9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.186685ms Apr 28 00:12:20.249: INFO: Pod "downwardapi-volume-c681e580-a3f7-4924-b52a-7dc33238dbe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030534158s Apr 28 00:12:22.261: INFO: Pod "downwardapi-volume-c681e580-a3f7-4924-b52a-7dc33238dbe9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042309367s STEP: Saw pod success Apr 28 00:12:22.261: INFO: Pod "downwardapi-volume-c681e580-a3f7-4924-b52a-7dc33238dbe9" satisfied condition "Succeeded or Failed" Apr 28 00:12:22.265: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c681e580-a3f7-4924-b52a-7dc33238dbe9 container client-container: STEP: delete the pod Apr 28 00:12:22.308: INFO: Waiting for pod downwardapi-volume-c681e580-a3f7-4924-b52a-7dc33238dbe9 to disappear Apr 28 00:12:22.319: INFO: Pod downwardapi-volume-c681e580-a3f7-4924-b52a-7dc33238dbe9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:12:22.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8274" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:12:22.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new 
ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0428 00:12:23.474356 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 28 00:12:23.474: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:12:23.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1009" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":53,"skipped":874,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:12:23.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 28 00:12:31.600: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 00:12:31.623: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 00:12:33.623: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 00:12:33.628: INFO: Pod pod-with-poststart-exec-hook still exists Apr 28 00:12:35.623: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 28 00:12:35.627: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:12:35.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2542" for this suite. 
• [SLOW TEST:12.152 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":918,"failed":0} [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:12:35.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:12:35.700: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-518d9849-f9cc-4045-be27-4b6efb33dbbb" in namespace "security-context-test-8748" to be "Succeeded or Failed" Apr 28 00:12:35.703: INFO: Pod 
"busybox-readonly-false-518d9849-f9cc-4045-be27-4b6efb33dbbb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.191226ms Apr 28 00:12:37.709: INFO: Pod "busybox-readonly-false-518d9849-f9cc-4045-be27-4b6efb33dbbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009759249s Apr 28 00:12:39.714: INFO: Pod "busybox-readonly-false-518d9849-f9cc-4045-be27-4b6efb33dbbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01420358s Apr 28 00:12:39.714: INFO: Pod "busybox-readonly-false-518d9849-f9cc-4045-be27-4b6efb33dbbb" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:12:39.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8748" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":918,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:12:39.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:12:46.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7040" for this suite. • [SLOW TEST:7.115 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":56,"skipped":945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:12:46.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 28 00:12:46.887: INFO: Waiting up to 5m0s for pod "pod-f8db07bb-c0b6-46e0-b37d-bb6ddc8b161c" in namespace "emptydir-5713" to be "Succeeded or Failed" Apr 28 00:12:46.891: INFO: Pod "pod-f8db07bb-c0b6-46e0-b37d-bb6ddc8b161c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.734177ms Apr 28 00:12:48.895: INFO: Pod "pod-f8db07bb-c0b6-46e0-b37d-bb6ddc8b161c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00735503s Apr 28 00:12:50.899: INFO: Pod "pod-f8db07bb-c0b6-46e0-b37d-bb6ddc8b161c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011863977s STEP: Saw pod success Apr 28 00:12:50.899: INFO: Pod "pod-f8db07bb-c0b6-46e0-b37d-bb6ddc8b161c" satisfied condition "Succeeded or Failed" Apr 28 00:12:50.903: INFO: Trying to get logs from node latest-worker2 pod pod-f8db07bb-c0b6-46e0-b37d-bb6ddc8b161c container test-container: STEP: delete the pod Apr 28 00:12:50.926: INFO: Waiting for pod pod-f8db07bb-c0b6-46e0-b37d-bb6ddc8b161c to disappear Apr 28 00:12:50.998: INFO: Pod pod-f8db07bb-c0b6-46e0-b37d-bb6ddc8b161c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:12:50.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5713" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":974,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:12:51.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 
28 00:12:51.920: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 00:12:53.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629571, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629571, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629571, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629571, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 00:12:56.961: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 28 00:13:01.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-9890 to-be-attached-pod -i -c=container1' Apr 28 00:13:01.213: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:13:01.219: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9890" for this suite. STEP: Destroying namespace "webhook-9890-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.311 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":58,"skipped":996,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:13:01.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-498a6b36-1c34-4bdc-9b66-492654e364b4 in namespace container-probe-6294 Apr 28 
00:13:05.439: INFO: Started pod test-webserver-498a6b36-1c34-4bdc-9b66-492654e364b4 in namespace container-probe-6294 STEP: checking the pod's current state and verifying that restartCount is present Apr 28 00:13:05.442: INFO: Initial restart count of pod test-webserver-498a6b36-1c34-4bdc-9b66-492654e364b4 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:17:06.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6294" for this suite. • [SLOW TEST:244.823 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":1021,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:17:06.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:17:20.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9657" for this suite. • [SLOW TEST:14.066 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":60,"skipped":1049,"failed":0} [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:17:20.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Apr 28 00:17:20.340: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Apr 28 00:17:20.344: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 28 00:17:20.344: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Apr 28 00:17:20.350: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 28 00:17:20.350: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Apr 28 00:17:20.409: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Apr 28 00:17:20.409: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Apr 28 00:17:27.546: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:17:27.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-7960" for this suite. • [SLOW TEST:7.393 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":275,"completed":61,"skipped":1049,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:17:27.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 00:17:28.254: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 00:17:30.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629848, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629848, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629848, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629848, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:17:32.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629848, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629848, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629848, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723629848, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 00:17:35.292: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does 
not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:17:35.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8451" for this suite. STEP: Destroying namespace "webhook-8451-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.936 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":62,"skipped":1091,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:17:35.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat 
/tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-05ac6233-64a2-43f5-b783-ae8b69a9d500 in namespace container-probe-8905 Apr 28 00:17:39.638: INFO: Started pod busybox-05ac6233-64a2-43f5-b783-ae8b69a9d500 in namespace container-probe-8905 STEP: checking the pod's current state and verifying that restartCount is present Apr 28 00:17:39.641: INFO: Initial restart count of pod busybox-05ac6233-64a2-43f5-b783-ae8b69a9d500 is 0 Apr 28 00:18:29.749: INFO: Restart count of pod container-probe-8905/busybox-05ac6233-64a2-43f5-b783-ae8b69a9d500 is now 1 (50.108054723s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:18:29.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8905" for this suite. 
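The LimitRange verification lines earlier in this run print quantities in Kubernetes' internal `resource.Quantity` form: a `{{unscaled scale}}` pair meaning `unscaled * 10**scale` for `DecimalSI`, and a plain byte count rendered with power-of-two suffixes (`Mi`, `Gi`) for `BinarySI`. A small sketch (not framework code) decoding the exact values from those log lines:

```python
# Decode the resource.Quantity internals printed by the LimitRange test above.
# DecimalSI: {{unscaled scale}} means unscaled * 10**scale (so {{100 -3}} is
# 0.1 cores, rendered "100m"). BinarySI: the value is a byte count rendered
# with Ki/Mi/Gi suffixes. All pairs below are copied from the log.

def decimal_si(unscaled: int, scale: int) -> float:
    """unscaled * 10**scale, e.g. {{100 -3}} -> 0.1 CPU cores ("100m")."""
    return unscaled * 10 ** scale

MI = 1024 ** 2
GI = 1024 ** 3

assert decimal_si(100, -3) == 0.1            # cpu request {{100 -3}} = 100m
assert decimal_si(500, -3) == 0.5            # cpu limit   {{500 -3}} = 500m
assert 209715200 == 200 * MI                 # memory request: 200Mi
assert 214748364800 == 200 * GI              # ephemeral-storage request: 200Gi
assert 524288000 == 500 * MI                 # memory limit: 500Mi
assert 536870912000 == 500 * GI              # ephemeral-storage limit: 500Gi
assert 157286400 == 150 * MI                 # merged-pod memory request: 150Mi
assert 161061273600 == 150 * GI              # merged-pod storage request: 150Gi
print("all quantities decoded")
```

This is why the defaulted pod's requests print as bare `BinarySI` byte counts while the limits carry `500Gi`/`500Mi` suffixes: both render the same underlying byte value.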
• [SLOW TEST:54.257 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1101,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:18:29.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 28 00:18:33.906: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:18:34.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7717" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1107,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:18:34.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Apr 28 00:18:34.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Apr 28 00:18:34.287: INFO: stderr: "" Apr 28 00:18:34.287: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:18:34.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9202" for this suite. 
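The `kubectl api-versions` stdout captured above is one group/version per line, and the test's only assertion is that the legacy core `v1` appears among them. A minimal sketch of that check, with the stdout abbreviated to a few representative lines from the log:

```python
# Sketch of the api-versions check: stdout is newline-separated
# group/versions; the core legacy API surfaces as the bare string "v1".
# The list here is abbreviated from the full stdout in the log above.
stdout = (
    "admissionregistration.k8s.io/v1\n"
    "apps/v1\n"
    "batch/v1\n"
    "networking.k8s.io/v1\n"
    "rbac.authorization.k8s.io/v1\n"
    "storage.k8s.io/v1\n"
    "v1\n"
)

versions = stdout.strip().split("\n")
assert "v1" in versions                       # the test's pass condition

# Every other entry is "<group>/<version>"; only the core group has no slash.
core = [v for v in versions if "/" not in v]
assert core == ["v1"]
print(f"{len(versions)} group/versions, core API present")
```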
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":65,"skipped":1129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:18:34.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 28 00:18:34.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5451' Apr 28 00:18:34.504: INFO: stderr: "" Apr 28 00:18:34.504: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 28 00:18:39.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5451 -o json' Apr 28 00:18:39.656: INFO: stderr: "" Apr 
28 00:18:39.656: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-28T00:18:34Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5451\",\n \"resourceVersion\": \"11581981\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5451/pods/e2e-test-httpd-pod\",\n \"uid\": \"c5320be5-39aa-4ebb-88e5-7950152f73f3\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-c8dw9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-c8dw9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-c8dw9\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T00:18:34Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T00:18:37Z\",\n \"status\": \"True\",\n 
\"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T00:18:37Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T00:18:34Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://c91109796fd36e8838d8bcbeb37ca0b81a5e34a1e5ad45aa21f0488184d21607\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-28T00:18:36Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.64\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.64\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-28T00:18:34Z\"\n }\n}\n" STEP: replace the image in the pod Apr 28 00:18:39.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5451' Apr 28 00:18:39.983: INFO: stderr: "" Apr 28 00:18:39.983: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Apr 28 00:18:40.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5451' Apr 28 00:18:43.118: INFO: stderr: "" Apr 28 00:18:43.118: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:18:43.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5451" for this suite. • [SLOW TEST:8.829 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":66,"skipped":1173,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:18:43.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-1e8c6dde-89b6-4b04-91cb-7e57509c54dc STEP: Creating a pod to test consume configMaps Apr 28 00:18:43.210: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bef0922f-a5f6-46d1-aa4e-b35f3e9d2ad2" in namespace "projected-6935" to be "Succeeded 
or Failed" Apr 28 00:18:43.228: INFO: Pod "pod-projected-configmaps-bef0922f-a5f6-46d1-aa4e-b35f3e9d2ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.189217ms Apr 28 00:18:45.232: INFO: Pod "pod-projected-configmaps-bef0922f-a5f6-46d1-aa4e-b35f3e9d2ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022049985s Apr 28 00:18:47.236: INFO: Pod "pod-projected-configmaps-bef0922f-a5f6-46d1-aa4e-b35f3e9d2ad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026100623s STEP: Saw pod success Apr 28 00:18:47.236: INFO: Pod "pod-projected-configmaps-bef0922f-a5f6-46d1-aa4e-b35f3e9d2ad2" satisfied condition "Succeeded or Failed" Apr 28 00:18:47.239: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-bef0922f-a5f6-46d1-aa4e-b35f3e9d2ad2 container projected-configmap-volume-test: STEP: delete the pod Apr 28 00:18:47.381: INFO: Waiting for pod pod-projected-configmaps-bef0922f-a5f6-46d1-aa4e-b35f3e9d2ad2 to disappear Apr 28 00:18:47.418: INFO: Pod pod-projected-configmaps-bef0922f-a5f6-46d1-aa4e-b35f3e9d2ad2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:18:47.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6935" for this suite. 
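The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above show the framework polling the pod phase (Pending, Pending, then Succeeded at ~4s elapsed). A minimal sketch of such a poll loop, under the assumption of a pluggable phase getter (this is an illustration, not the framework's actual `WaitForPodSuccessInNamespace` implementation):

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300.0, interval_s=2.0,
                            now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase or timeout_s
    elapses. Mirrors the 'Waiting up to 5m0s ... Elapsed: ...' log pattern.
    A sketch only; the real framework helper differs in detail."""
    start = now()
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase, now() - start
        if now() - start > timeout_s:
            raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")
        sleep(interval_s)

# Stubbed getter mimicking the log: Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, elapsed = wait_for_terminal_phase(lambda: next(phases),
                                         sleep=lambda s: None)
assert phase == "Succeeded"
print(f"pod reached {phase}")
```

Injecting `now` and `sleep` keeps the loop testable without real waiting; the e2e framework achieves the same effect with its own poll utilities.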
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1183,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:18:47.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 28 00:18:47.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5286' Apr 28 00:18:47.713: INFO: stderr: "" Apr 28 00:18:47.713: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Apr 28 00:18:47.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5286' 
Apr 28 00:18:52.990: INFO: stderr: "" Apr 28 00:18:52.990: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:18:52.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5286" for this suite. • [SLOW TEST:5.571 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":68,"skipped":1187,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:18:52.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-2670001f-5e1f-4851-a946-39b6a9b9fbc7 in namespace container-probe-3723 Apr 28 00:18:57.075: INFO: Started pod busybox-2670001f-5e1f-4851-a946-39b6a9b9fbc7 in namespace container-probe-3723 STEP: checking the pod's current state and verifying that restartCount is present Apr 28 00:18:57.079: INFO: Initial restart count of pod busybox-2670001f-5e1f-4851-a946-39b6a9b9fbc7 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:22:57.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3723" for this suite. • [SLOW TEST:244.700 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1204,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:22:57.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 28 00:23:05.800: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 00:23:05.806: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 00:23:07.806: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 00:23:07.827: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 00:23:09.806: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 00:23:09.811: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 00:23:11.806: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 00:23:11.811: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 00:23:13.806: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 00:23:13.821: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:23:13.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4731" for this suite. 
• [SLOW TEST:16.130 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1212,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:23:13.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:23:13.891: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1d236da4-d1e5-4341-b7ee-82879df36fd3" in namespace "security-context-test-80" to be "Succeeded or Failed" Apr 28 00:23:13.894: INFO: Pod 
"busybox-privileged-false-1d236da4-d1e5-4341-b7ee-82879df36fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.691604ms Apr 28 00:23:15.907: INFO: Pod "busybox-privileged-false-1d236da4-d1e5-4341-b7ee-82879df36fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016032806s Apr 28 00:23:17.911: INFO: Pod "busybox-privileged-false-1d236da4-d1e5-4341-b7ee-82879df36fd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019832784s Apr 28 00:23:17.911: INFO: Pod "busybox-privileged-false-1d236da4-d1e5-4341-b7ee-82879df36fd3" satisfied condition "Succeeded or Failed" Apr 28 00:23:17.916: INFO: Got logs for pod "busybox-privileged-false-1d236da4-d1e5-4341-b7ee-82879df36fd3": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:23:17.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-80" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:23:17.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-ce23852b-57e4-4bc5-a70d-2ecda14e7196 STEP: Creating a pod to test consume secrets Apr 28 00:23:18.034: INFO: Waiting up to 5m0s for pod "pod-secrets-fe243394-5724-4d1f-8d0b-770d758a19a8" in namespace "secrets-545" to be "Succeeded or Failed" Apr 28 00:23:18.051: INFO: Pod "pod-secrets-fe243394-5724-4d1f-8d0b-770d758a19a8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.514374ms Apr 28 00:23:20.056: INFO: Pod "pod-secrets-fe243394-5724-4d1f-8d0b-770d758a19a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021715705s Apr 28 00:23:22.060: INFO: Pod "pod-secrets-fe243394-5724-4d1f-8d0b-770d758a19a8": Phase="Running", Reason="", readiness=true. Elapsed: 4.025631139s Apr 28 00:23:24.064: INFO: Pod "pod-secrets-fe243394-5724-4d1f-8d0b-770d758a19a8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.029767057s STEP: Saw pod success Apr 28 00:23:24.064: INFO: Pod "pod-secrets-fe243394-5724-4d1f-8d0b-770d758a19a8" satisfied condition "Succeeded or Failed" Apr 28 00:23:24.067: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-fe243394-5724-4d1f-8d0b-770d758a19a8 container secret-volume-test: STEP: delete the pod Apr 28 00:23:24.109: INFO: Waiting for pod pod-secrets-fe243394-5724-4d1f-8d0b-770d758a19a8 to disappear Apr 28 00:23:24.124: INFO: Pod pod-secrets-fe243394-5724-4d1f-8d0b-770d758a19a8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:23:24.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-545" for this suite. • [SLOW TEST:6.222 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1249,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:23:24.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service 
account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 28 00:23:24.193: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:23:40.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-572" for this suite. • [SLOW TEST:16.014 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":73,"skipped":1250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:23:40.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service 
account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 28 00:23:40.239: INFO: Waiting up to 5m0s for pod "downward-api-e55bbfba-65fa-46cd-86ef-65f406a97a25" in namespace "downward-api-2698" to be "Succeeded or Failed" Apr 28 00:23:40.255: INFO: Pod "downward-api-e55bbfba-65fa-46cd-86ef-65f406a97a25": Phase="Pending", Reason="", readiness=false. Elapsed: 16.67695ms Apr 28 00:23:42.259: INFO: Pod "downward-api-e55bbfba-65fa-46cd-86ef-65f406a97a25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02065921s Apr 28 00:23:44.263: INFO: Pod "downward-api-e55bbfba-65fa-46cd-86ef-65f406a97a25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024038494s STEP: Saw pod success Apr 28 00:23:44.263: INFO: Pod "downward-api-e55bbfba-65fa-46cd-86ef-65f406a97a25" satisfied condition "Succeeded or Failed" Apr 28 00:23:44.265: INFO: Trying to get logs from node latest-worker2 pod downward-api-e55bbfba-65fa-46cd-86ef-65f406a97a25 container dapi-container: STEP: delete the pod Apr 28 00:23:44.326: INFO: Waiting for pod downward-api-e55bbfba-65fa-46cd-86ef-65f406a97a25 to disappear Apr 28 00:23:44.332: INFO: Pod downward-api-e55bbfba-65fa-46cd-86ef-65f406a97a25 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:23:44.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2698" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1277,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:23:44.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 28 00:23:44.401: INFO: Waiting up to 5m0s for pod "pod-d41428b7-d4d6-4bba-af94-da5a50b1bb68" in namespace "emptydir-8410" to be "Succeeded or Failed" Apr 28 00:23:44.404: INFO: Pod "pod-d41428b7-d4d6-4bba-af94-da5a50b1bb68": Phase="Pending", Reason="", readiness=false. Elapsed: 3.030887ms Apr 28 00:23:46.408: INFO: Pod "pod-d41428b7-d4d6-4bba-af94-da5a50b1bb68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007057203s Apr 28 00:23:48.412: INFO: Pod "pod-d41428b7-d4d6-4bba-af94-da5a50b1bb68": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010998571s STEP: Saw pod success Apr 28 00:23:48.412: INFO: Pod "pod-d41428b7-d4d6-4bba-af94-da5a50b1bb68" satisfied condition "Succeeded or Failed" Apr 28 00:23:48.415: INFO: Trying to get logs from node latest-worker2 pod pod-d41428b7-d4d6-4bba-af94-da5a50b1bb68 container test-container: STEP: delete the pod Apr 28 00:23:48.444: INFO: Waiting for pod pod-d41428b7-d4d6-4bba-af94-da5a50b1bb68 to disappear Apr 28 00:23:48.458: INFO: Pod pod-d41428b7-d4d6-4bba-af94-da5a50b1bb68 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:23:48.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8410" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1283,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:23:48.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace 
statefulset-7303 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 28 00:23:48.584: INFO: Found 0 stateful pods, waiting for 3 Apr 28 00:23:58.589: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:23:58.589: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:23:58.589: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 28 00:24:08.588: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:24:08.588: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:24:08.588: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 28 00:24:08.614: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 28 00:24:18.652: INFO: Updating stateful set ss2 Apr 28 00:24:18.674: INFO: Waiting for Pod statefulset-7303/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 28 00:24:28.810: INFO: Found 2 stateful pods, waiting for 3 Apr 28 00:24:38.814: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:24:38.814: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:24:38.814: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling 
update Apr 28 00:24:38.838: INFO: Updating stateful set ss2 Apr 28 00:24:39.139: INFO: Waiting for Pod statefulset-7303/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 28 00:24:49.148: INFO: Waiting for Pod statefulset-7303/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 28 00:24:59.164: INFO: Updating stateful set ss2 Apr 28 00:24:59.203: INFO: Waiting for StatefulSet statefulset-7303/ss2 to complete update Apr 28 00:24:59.203: INFO: Waiting for Pod statefulset-7303/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 28 00:25:09.218: INFO: Deleting all statefulset in ns statefulset-7303 Apr 28 00:25:09.221: INFO: Scaling statefulset ss2 to 0 Apr 28 00:25:19.238: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 00:25:19.241: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:25:19.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7303" for this suite. 
• [SLOW TEST:90.794 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":76,"skipped":1290,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:25:19.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 28 00:25:19.312: INFO: Waiting up to 5m0s for pod "pod-098a1b1b-8f2c-4f63-9766-92c778e41e0c" in namespace "emptydir-2055" to be "Succeeded or Failed" Apr 28 00:25:19.344: INFO: Pod "pod-098a1b1b-8f2c-4f63-9766-92c778e41e0c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.85358ms Apr 28 00:25:21.355: INFO: Pod "pod-098a1b1b-8f2c-4f63-9766-92c778e41e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042987286s Apr 28 00:25:23.359: INFO: Pod "pod-098a1b1b-8f2c-4f63-9766-92c778e41e0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046935899s STEP: Saw pod success Apr 28 00:25:23.359: INFO: Pod "pod-098a1b1b-8f2c-4f63-9766-92c778e41e0c" satisfied condition "Succeeded or Failed" Apr 28 00:25:23.362: INFO: Trying to get logs from node latest-worker2 pod pod-098a1b1b-8f2c-4f63-9766-92c778e41e0c container test-container: STEP: delete the pod Apr 28 00:25:23.396: INFO: Waiting for pod pod-098a1b1b-8f2c-4f63-9766-92c778e41e0c to disappear Apr 28 00:25:23.400: INFO: Pod pod-098a1b1b-8f2c-4f63-9766-92c778e41e0c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:25:23.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2055" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1290,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:25:23.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 28 00:25:31.606: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 00:25:31.610: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 00:25:33.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 00:25:33.613: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 00:25:35.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 00:25:35.613: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 00:25:37.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 00:25:37.613: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 00:25:39.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 00:25:39.650: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 00:25:41.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 00:25:41.613: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 00:25:43.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 00:25:43.614: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:25:43.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7374" for this suite. 
• [SLOW TEST:20.235 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:25:43.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:25:47.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8531" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:25:47.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:26:02.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8319" for this suite. STEP: Destroying namespace "nsdeletetest-2286" for this suite. Apr 28 00:26:02.972: INFO: Namespace nsdeletetest-2286 was already deleted STEP: Destroying namespace "nsdeletetest-3288" for this suite. 
• [SLOW TEST:15.230 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":80,"skipped":1411,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:26:02.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0428 00:26:14.663495 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 28 00:26:14.663: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:26:14.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-671" for this suite. 
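The garbage-collector test above gives half of the pods created by `simpletest-rc-to-be-deleted` a second owner reference pointing at `simpletest-rc-to-stay`, then deletes the first owner and verifies the dual-owned pods survive. A hypothetical sketch of what such a pod's metadata could look like (the pod name and UIDs are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod-abc12       # hypothetical pod name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 00000000-0000-0000-0000-000000000001   # hypothetical UID
    blockOwnerDeletion: true       # this owner is deleted waiting for dependents
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 00000000-0000-0000-0000-000000000002   # hypothetical UID
```

Because the second owner reference remains valid, the garbage collector must not delete this pod when the first owner goes away.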
• [SLOW TEST:11.696 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":81,"skipped":1423,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:26:14.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:26:14.730: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 28 00:26:14.781: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 28 00:26:19.793: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 28 00:26:19.793: INFO: Creating deployment "test-rolling-update-deployment" Apr 28 00:26:19.802: INFO: Ensuring deployment 
"test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 28 00:26:19.814: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 28 00:26:21.821: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 28 00:26:21.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630379, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630379, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630379, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630379, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:26:23.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630379, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630379, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630379, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630379, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:26:25.827: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 28 00:26:25.838: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9089 /apis/apps/v1/namespaces/deployment-9089/deployments/test-rolling-update-deployment 22877237-cb0b-4f8a-9cf2-1d3a92f3fe9e 11584256 1 2020-04-28 00:26:19 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003910db8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-28 00:26:19 +0000 UTC,LastTransitionTime:2020-04-28 00:26:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-28 00:26:24 +0000 UTC,LastTransitionTime:2020-04-28 00:26:19 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 28 00:26:25.840: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-9089 /apis/apps/v1/namespaces/deployment-9089/replicasets/test-rolling-update-deployment-664dd8fc7f 89c2b3aa-ecf2-49a1-98e2-f842fcf9ac3c 11584245 1 2020-04-28 00:26:19 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 22877237-cb0b-4f8a-9cf2-1d3a92f3fe9e 0xc0039548a7 0xc0039548a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] []
[] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003954918 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 28 00:26:25.840: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 28 00:26:25.841: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9089 /apis/apps/v1/namespaces/deployment-9089/replicasets/test-rolling-update-controller 4a2c0acd-0214-4512-be4f-cf09f2765496 11584254 2 2020-04-28 00:26:14 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 22877237-cb0b-4f8a-9cf2-1d3a92f3fe9e 0xc0039547d7 0xc0039547d8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003954838 ClusterFirst map[] false false false 
PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 28 00:26:25.843: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-t9gz6" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-t9gz6 test-rolling-update-deployment-664dd8fc7f- deployment-9089 /api/v1/namespaces/deployment-9089/pods/test-rolling-update-deployment-664dd8fc7f-t9gz6 bc324f59-7bea-4549-b924-1d4e1b2d42b9 11584244 0 2020-04-28 00:26:19 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 89c2b3aa-ecf2-49a1-98e2-f842fcf9ac3c 0xc003954de7 0xc003954de8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcrt9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcrt9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcrt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:26:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-04-28 00:26:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:26:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:26:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.93,StartTime:2020-04-28 00:26:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 00:26:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://868f91cfedaf65178c38f98df9a42db22becbb3cde8047ff2f6beac0dacc75a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:26:25.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9089" for this suite. 
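The rolling-update test above drives a Deployment whose strategy matches the status dump: `RollingUpdate` with `maxUnavailable` and `maxSurge` both at 25% (the Deployment defaults). A minimal manifest sketch consistent with that dump (the replica count, labels, and image are taken from the log; everything else is assumed boilerplate):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of desired pods may be down mid-rollout
      maxSurge: 25%         # at most 25% extra pods may exist mid-rollout
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
```

Because the selector matches the pre-existing `test-rolling-update-controller` replica set's pods, the Deployment adopts that replica set as its old revision and rolls it down to zero, which is what the log shows.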
• [SLOW TEST:11.181 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":82,"skipped":1433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:26:25.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 00:26:26.443: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 00:26:28.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630386, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630386, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630386, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630386, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 00:26:31.605: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:26:31.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1202-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:26:32.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6565" for this suite. STEP: Destroying namespace "webhook-6565-markers" for this suite. 
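The admission test above registers a mutating webhook for the `e2e-test-webhook-1202-crds` custom resource and exercises it while the CRD's storage version flips from v1 to v2. A rough sketch of what such a registration could look like (the configuration name, service path, and API group are assumptions; the e2e framework constructs its own):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook        # illustrative name
webhooks:
- name: mutate-custom-resource.example.com
  rules:
  - apiGroups: ["webhook.example.com"]   # assumed CRD group
    apiVersions: ["v1", "v2"]            # both served versions are matched
    operations: ["CREATE", "UPDATE"]
    resources: ["e2e-test-webhook-1202-crds"]
  clientConfig:
    service:
      namespace: webhook-6565
      name: e2e-test-webhook
      path: /mutating-custom-resource     # assumed handler path
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

The point of the test is that mutation keeps working regardless of which version the API server currently stores, so the webhook matches every served version rather than a single one.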
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.048 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":83,"skipped":1481,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:26:32.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:26:49.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2009" for this suite. • [SLOW TEST:16.210 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":275,"completed":84,"skipped":1483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:26:49.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:26:49.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 28 00:26:49.323: INFO: stderr: "" Apr 28 00:26:49.323: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:26:49.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-5049" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":85,"skipped":1520,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:26:49.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7719.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7719.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7719.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7719.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7719.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7719.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 00:26:55.474: INFO: DNS probes using dns-7719/dns-test-70edeb37-bb6b-4dff-8cfc-0c4f508aca10 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:26:55.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7719" for this suite. 
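The DNS probes above resolve `dns-querier-2.dns-test-service-2.dns-7719.svc.cluster.local`, a name that only exists because the probed pod sets `hostname` and `subdomain` matching a headless service. A minimal sketch of that pairing (the image and command are illustrative assumptions; the service and pod names come from the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None            # headless: DNS records point at pod IPs
  selector:
    name: dns-querier
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2   # together these yield
                                  # dns-querier-2.dns-test-service-2.<ns>.svc.cluster.local
  containers:
  - name: querier
    image: docker.io/library/busybox:1.29
    command: ["sleep", "600"]
```

The test also checks the per-pod A record of the form `<ip-with-dashes>.dns-7719.pod.cluster.local`, which is what the `hostname -i | awk` pipeline in the probe commands constructs.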
• [SLOW TEST:6.426 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":86,"skipped":1541,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:26:55.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-39e609d7-4fde-4b71-a206-25dcb329a9d5 STEP: Creating a pod to test consume configMaps Apr 28 00:26:56.175: INFO: Waiting up to 5m0s for pod "pod-configmaps-25d7b384-a8c7-4139-a627-f0a616069929" in namespace "configmap-4190" to be "Succeeded or Failed" Apr 28 00:26:56.193: INFO: Pod "pod-configmaps-25d7b384-a8c7-4139-a627-f0a616069929": Phase="Pending", Reason="", readiness=false. Elapsed: 18.183341ms Apr 28 00:26:58.279: INFO: Pod "pod-configmaps-25d7b384-a8c7-4139-a627-f0a616069929": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.104374688s Apr 28 00:27:00.283: INFO: Pod "pod-configmaps-25d7b384-a8c7-4139-a627-f0a616069929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108262132s STEP: Saw pod success Apr 28 00:27:00.283: INFO: Pod "pod-configmaps-25d7b384-a8c7-4139-a627-f0a616069929" satisfied condition "Succeeded or Failed" Apr 28 00:27:00.286: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-25d7b384-a8c7-4139-a627-f0a616069929 container configmap-volume-test: STEP: delete the pod Apr 28 00:27:00.323: INFO: Waiting for pod pod-configmaps-25d7b384-a8c7-4139-a627-f0a616069929 to disappear Apr 28 00:27:00.328: INFO: Pod pod-configmaps-25d7b384-a8c7-4139-a627-f0a616069929 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:27:00.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4190" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1546,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:27:00.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:27:00.455: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 28 00:27:03.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1396 create -f -' Apr 28 00:27:08.948: INFO: stderr: "" Apr 28 00:27:08.948: INFO: stdout: "e2e-test-crd-publish-openapi-6459-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 28 00:27:08.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1396 delete e2e-test-crd-publish-openapi-6459-crds test-foo' Apr 28 00:27:09.076: INFO: stderr: "" Apr 28 00:27:09.076: INFO: stdout: "e2e-test-crd-publish-openapi-6459-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 28 00:27:09.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1396 apply -f -' Apr 28 00:27:09.313: INFO: stderr: "" Apr 28 00:27:09.313: INFO: stdout: "e2e-test-crd-publish-openapi-6459-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 28 00:27:09.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1396 delete e2e-test-crd-publish-openapi-6459-crds test-foo' Apr 28 00:27:09.418: INFO: stderr: "" Apr 28 00:27:09.418: INFO: stdout: "e2e-test-crd-publish-openapi-6459-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 28 00:27:09.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1396 create -f -' Apr 28 00:27:09.644: INFO: rc: 1 Apr 28 00:27:09.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1396 apply -f -' Apr 28 00:27:09.879: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 28 00:27:09.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1396 create -f -' Apr 28 00:27:10.103: INFO: rc: 1 Apr 28 00:27:10.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1396 apply -f -' Apr 28 00:27:10.342: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 28 00:27:10.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6459-crds' Apr 28 00:27:10.562: INFO: stderr: "" Apr 28 00:27:10.562: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6459-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 28 00:27:10.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6459-crds.metadata' Apr 28 00:27:10.798: INFO: stderr: "" Apr 28 00:27:10.798: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6459-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 28 00:27:10.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6459-crds.spec' Apr 28 00:27:11.028: INFO: stderr: "" Apr 28 00:27:11.028: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6459-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 28 00:27:11.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6459-crds.spec.bars' Apr 28 00:27:11.265: INFO: stderr: "" Apr 28 00:27:11.266: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6459-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 28 00:27:11.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6459-crds.spec.bars2' Apr 28 00:27:11.470: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:27:13.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1396" for this suite. • [SLOW TEST:13.049 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":88,"skipped":1559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:27:13.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-2637/secret-test-cf65b770-e8fc-44a9-bc1b-9b192a3174c9 STEP: Creating a pod to test consume secrets Apr 28 00:27:13.441: INFO: Waiting up to 5m0s for pod "pod-configmaps-a0983adf-f621-442e-8013-8cdf227e086b" in namespace "secrets-2637" to be "Succeeded or Failed" Apr 28 00:27:13.445: INFO: Pod "pod-configmaps-a0983adf-f621-442e-8013-8cdf227e086b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.517995ms Apr 28 00:27:15.584: INFO: Pod "pod-configmaps-a0983adf-f621-442e-8013-8cdf227e086b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143107217s Apr 28 00:27:17.589: INFO: Pod "pod-configmaps-a0983adf-f621-442e-8013-8cdf227e086b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147203459s Apr 28 00:27:19.592: INFO: Pod "pod-configmaps-a0983adf-f621-442e-8013-8cdf227e086b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.151075526s STEP: Saw pod success Apr 28 00:27:19.592: INFO: Pod "pod-configmaps-a0983adf-f621-442e-8013-8cdf227e086b" satisfied condition "Succeeded or Failed" Apr 28 00:27:19.596: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a0983adf-f621-442e-8013-8cdf227e086b container env-test: STEP: delete the pod Apr 28 00:27:19.643: INFO: Waiting for pod pod-configmaps-a0983adf-f621-442e-8013-8cdf227e086b to disappear Apr 28 00:27:19.655: INFO: Pod pod-configmaps-a0983adf-f621-442e-8013-8cdf227e086b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:27:19.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2637" for this suite. 
• [SLOW TEST:6.275 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1604,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:27:19.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 28 00:27:19.777: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:27:19.780: INFO: Number of nodes with available pods: 0 Apr 28 00:27:19.780: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:27:20.794: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:27:20.797: INFO: Number of nodes with available pods: 0 Apr 28 00:27:20.797: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:27:21.785: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:27:21.788: INFO: Number of nodes with available pods: 0 Apr 28 00:27:21.788: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:27:22.784: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:27:22.788: INFO: Number of nodes with available pods: 1 Apr 28 00:27:22.788: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:27:23.785: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:27:23.789: INFO: Number of nodes with available pods: 2 Apr 28 00:27:23.789: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 28 00:27:23.820: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:27:23.832: INFO: Number of nodes with available pods: 2 Apr 28 00:27:23.832: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-222, will wait for the garbage collector to delete the pods Apr 28 00:27:24.912: INFO: Deleting DaemonSet.extensions daemon-set took: 4.705381ms Apr 28 00:27:25.213: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.391918ms Apr 28 00:27:32.816: INFO: Number of nodes with available pods: 0 Apr 28 00:27:32.816: INFO: Number of running nodes: 0, number of available pods: 0 Apr 28 00:27:32.819: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-222/daemonsets","resourceVersion":"11584804"},"items":null} Apr 28 00:27:32.823: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-222/pods","resourceVersion":"11584804"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:27:32.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-222" for this suite. 
• [SLOW TEST:13.178 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":90,"skipped":1618,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:27:32.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-8c7cc1b2-41ba-4640-8f5b-3886793c954f STEP: Creating a pod to test consume configMaps Apr 28 00:27:32.944: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b0b316e-53a2-400a-bee9-c5d29cce4976" in namespace "projected-6055" to be "Succeeded or Failed" Apr 28 00:27:32.959: INFO: Pod "pod-projected-configmaps-8b0b316e-53a2-400a-bee9-c5d29cce4976": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.946811ms Apr 28 00:27:34.963: INFO: Pod "pod-projected-configmaps-8b0b316e-53a2-400a-bee9-c5d29cce4976": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018693469s Apr 28 00:27:36.967: INFO: Pod "pod-projected-configmaps-8b0b316e-53a2-400a-bee9-c5d29cce4976": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022913791s STEP: Saw pod success Apr 28 00:27:36.967: INFO: Pod "pod-projected-configmaps-8b0b316e-53a2-400a-bee9-c5d29cce4976" satisfied condition "Succeeded or Failed" Apr 28 00:27:36.971: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-8b0b316e-53a2-400a-bee9-c5d29cce4976 container projected-configmap-volume-test: STEP: delete the pod Apr 28 00:27:36.986: INFO: Waiting for pod pod-projected-configmaps-8b0b316e-53a2-400a-bee9-c5d29cce4976 to disappear Apr 28 00:27:37.008: INFO: Pod pod-projected-configmaps-8b0b316e-53a2-400a-bee9-c5d29cce4976 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:27:37.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6055" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:27:37.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 28 00:27:44.484: INFO: 4 pods remaining Apr 28 00:27:44.484: INFO: 0 pods has nil DeletionTimestamp Apr 28 00:27:44.484: INFO: Apr 28 00:27:45.460: INFO: 0 pods remaining Apr 28 00:27:45.460: INFO: 0 pods has nil DeletionTimestamp Apr 28 00:27:45.460: INFO: STEP: Gathering metrics W0428 00:27:46.028051 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 28 00:27:46.028: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:27:46.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5981" for this suite. 
• [SLOW TEST:9.002 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":92,"skipped":1655,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:27:46.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 28 00:27:46.310: INFO: Waiting up to 5m0s for pod "downwardapi-volume-350056a8-d089-4185-8b0f-ac720e436750" in namespace "projected-5937" to be "Succeeded or Failed"
Apr 28 00:27:46.445: INFO: Pod "downwardapi-volume-350056a8-d089-4185-8b0f-ac720e436750": Phase="Pending", Reason="", readiness=false. Elapsed: 134.633589ms
Apr 28 00:27:48.449: INFO: Pod "downwardapi-volume-350056a8-d089-4185-8b0f-ac720e436750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138883886s
Apr 28 00:27:50.510: INFO: Pod "downwardapi-volume-350056a8-d089-4185-8b0f-ac720e436750": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19972398s
Apr 28 00:27:52.513: INFO: Pod "downwardapi-volume-350056a8-d089-4185-8b0f-ac720e436750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.202636215s
STEP: Saw pod success
Apr 28 00:27:52.513: INFO: Pod "downwardapi-volume-350056a8-d089-4185-8b0f-ac720e436750" satisfied condition "Succeeded or Failed"
Apr 28 00:27:52.515: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-350056a8-d089-4185-8b0f-ac720e436750 container client-container:
STEP: delete the pod
Apr 28 00:27:52.531: INFO: Waiting for pod downwardapi-volume-350056a8-d089-4185-8b0f-ac720e436750 to disappear
Apr 28 00:27:52.542: INFO: Pod downwardapi-volume-350056a8-d089-4185-8b0f-ac720e436750 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:27:52.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5937" for this suite.
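For reference, the kind of pod the projected downward API test above creates can be sketched with a manifest like the following. The object name, image, and file path here are illustrative assumptions, not values taken from this run; the `resourceFieldRef` mechanism is what the test exercises:

```yaml
# Sketch only: a projected volume exposing the container's memory limit.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi
```

The pod prints the limit (in Mi, per the `divisor`) and exits, so it reaches the "Succeeded or Failed" condition the log polls for.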
• [SLOW TEST:6.514 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1658,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:27:52.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 28 00:27:52.627: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0d5c9f7-ba37-4c05-b493-26d55a9a6fe9" in namespace "downward-api-4658" to be "Succeeded or Failed"
Apr 28 00:27:52.650: INFO: Pod "downwardapi-volume-d0d5c9f7-ba37-4c05-b493-26d55a9a6fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.466416ms
Apr 28 00:27:54.655: INFO: Pod "downwardapi-volume-d0d5c9f7-ba37-4c05-b493-26d55a9a6fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0281524s
Apr 28 00:27:56.659: INFO: Pod "downwardapi-volume-d0d5c9f7-ba37-4c05-b493-26d55a9a6fe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031493612s
STEP: Saw pod success
Apr 28 00:27:56.659: INFO: Pod "downwardapi-volume-d0d5c9f7-ba37-4c05-b493-26d55a9a6fe9" satisfied condition "Succeeded or Failed"
Apr 28 00:27:56.661: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d0d5c9f7-ba37-4c05-b493-26d55a9a6fe9 container client-container:
STEP: delete the pod
Apr 28 00:27:56.752: INFO: Waiting for pod downwardapi-volume-d0d5c9f7-ba37-4c05-b493-26d55a9a6fe9 to disappear
Apr 28 00:27:56.872: INFO: Pod downwardapi-volume-d0d5c9f7-ba37-4c05-b493-26d55a9a6fe9 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:27:56.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4658" for this suite.
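The cpu-request variant above uses a plain `downwardAPI` volume rather than a `projected` one. A minimal sketch, with assumed names and image (only the volume wiring reflects what the test exercises):

```yaml
# Sketch only: a downwardAPI volume exposing the container's cpu request.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: "250m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
```

With `divisor: 1m`, the file would contain the request in millicores (here `250`).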
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1672,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicationController
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:27:56.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 28 00:27:56.995: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 28 00:28:02.003: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:28:02.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3001" for this suite.
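The ReplicationController test above relies on label-selector matching: a controller only owns pods whose labels match its selector. A sketch of such a controller (image is an assumption; the `pod-release` name mirrors the pods seen in the log):

```yaml
# Sketch only: an RC whose ownership is defined purely by the selector.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: k8s.gcr.io/pause:3.2   # assumed image
```

Changing a pod's `name` label so it no longer matches `spec.selector` causes the RC to release (orphan) that pod and create a replacement to restore the replica count, which is the behavior the test asserts.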
• [SLOW TEST:5.238 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":95,"skipped":1677,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation
should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:28:02.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 00:28:02.206: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c03d421a-b6f7-430f-a754-49362d8c551f" in namespace "security-context-test-5971" to be "Succeeded or Failed"
Apr 28 00:28:02.210: INFO: Pod "alpine-nnp-false-c03d421a-b6f7-430f-a754-49362d8c551f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.541582ms
Apr 28 00:28:04.213: INFO: Pod "alpine-nnp-false-c03d421a-b6f7-430f-a754-49362d8c551f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006886192s
Apr 28 00:28:06.217: INFO: Pod "alpine-nnp-false-c03d421a-b6f7-430f-a754-49362d8c551f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010581499s
Apr 28 00:28:06.217: INFO: Pod "alpine-nnp-false-c03d421a-b6f7-430f-a754-49362d8c551f" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:28:06.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5971" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1718,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:28:06.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 00:28:06.345: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Pending, waiting for it to be Running (with Ready = true)
Apr 28 00:28:08.369: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Pending, waiting for it to be Running (with Ready = true)
Apr 28 00:28:10.348: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:12.348: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:14.348: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:16.349: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:18.349: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:20.348: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:22.348: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:24.349: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:26.349: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:28.349: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:30.349: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:32.348: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = false)
Apr 28 00:28:34.348: INFO: The status of Pod test-webserver-78289640-0a9a-4bca-a11d-cd895305ad80 is Running (Ready = true)
Apr 28 00:28:34.350: INFO: Container started at 2020-04-28 00:28:09 +0000 UTC, pod became ready at 2020-04-28 00:28:32 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:28:34.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-244" for this suite.
• [SLOW TEST:28.139 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1757,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets
should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:28:34.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-aaadc919-d04e-4152-b2b3-350260660758
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:28:34.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2722" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":98,"skipped":1758,"failed":0}
S
------------------------------
[k8s.io] Pods
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:28:34.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 00:28:34.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:28:38.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-788" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1759,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicaSet
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:28:38.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 00:28:38.737: INFO: Creating ReplicaSet my-hostname-basic-dca0e3d8-af7a-4bf3-95a1-9cfa8cee9a86
Apr 28 00:28:38.774: INFO: Pod name my-hostname-basic-dca0e3d8-af7a-4bf3-95a1-9cfa8cee9a86: Found 0 pods out of 1
Apr 28 00:28:43.795: INFO: Pod name my-hostname-basic-dca0e3d8-af7a-4bf3-95a1-9cfa8cee9a86: Found 1 pods out of 1
Apr 28 00:28:43.795: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-dca0e3d8-af7a-4bf3-95a1-9cfa8cee9a86" is running
Apr 28 00:28:43.801: INFO: Pod "my-hostname-basic-dca0e3d8-af7a-4bf3-95a1-9cfa8cee9a86-4rfp5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 00:28:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 00:28:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 00:28:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 00:28:38 +0000 UTC Reason: Message:}])
Apr 28 00:28:43.801: INFO: Trying to dial the pod
Apr 28 00:28:48.813: INFO: Controller my-hostname-basic-dca0e3d8-af7a-4bf3-95a1-9cfa8cee9a86: Got expected result from replica 1 [my-hostname-basic-dca0e3d8-af7a-4bf3-95a1-9cfa8cee9a86-4rfp5]: "my-hostname-basic-dca0e3d8-af7a-4bf3-95a1-9cfa8cee9a86-4rfp5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:28:48.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2794" for this suite.
• [SLOW TEST:10.130 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":100,"skipped":1765,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:28:48.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:28:52.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2179" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1820,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:28:52.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 28 00:28:53.953: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb84bdfb-2f1b-4861-837d-f80147d8feba" in namespace "downward-api-1180" to be "Succeeded or Failed"
Apr 28 00:28:53.960: INFO: Pod "downwardapi-volume-fb84bdfb-2f1b-4861-837d-f80147d8feba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.54399ms
Apr 28 00:28:55.964: INFO: Pod "downwardapi-volume-fb84bdfb-2f1b-4861-837d-f80147d8feba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010180216s
Apr 28 00:28:57.967: INFO: Pod "downwardapi-volume-fb84bdfb-2f1b-4861-837d-f80147d8feba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013634232s
STEP: Saw pod success
Apr 28 00:28:57.967: INFO: Pod "downwardapi-volume-fb84bdfb-2f1b-4861-837d-f80147d8feba" satisfied condition "Succeeded or Failed"
Apr 28 00:28:57.970: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fb84bdfb-2f1b-4861-837d-f80147d8feba container client-container:
STEP: delete the pod
Apr 28 00:28:58.034: INFO: Waiting for pod downwardapi-volume-fb84bdfb-2f1b-4861-837d-f80147d8feba to disappear
Apr 28 00:28:58.080: INFO: Pod downwardapi-volume-fb84bdfb-2f1b-4861-837d-f80147d8feba no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:28:58.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1180" for this suite.
• [SLOW TEST:5.206 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1842,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:28:58.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 28 00:28:58.339: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:29:05.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6068" for this suite.
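The init-container test above exercises the rule that `initContainers` run to completion, in order, before any regular container starts. A minimal sketch of such a pod; the names, image, and commands are illustrative assumptions:

```yaml
# Sketch only: init containers run sequentially before the app container.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo               # hypothetical name
spec:
  restartPolicy: Never          # matches the RestartNever case the test covers
  initContainers:
  - name: init1
    image: busybox:1.31         # assumed image
    command: ["true"]
  - name: init2
    image: busybox:1.31
    command: ["true"]
  containers:
  - name: run1
    image: busybox:1.31
    command: ["true"]
```

With `restartPolicy: Never`, a failing init container would leave the pod in a failed state rather than being retried indefinitely, which is part of what the test observes.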
• [SLOW TEST:7.863 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":103,"skipped":1862,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Watchers
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:29:06.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 28 00:29:06.097: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-516 /api/v1/namespaces/watch-516/configmaps/e2e-watch-test-label-changed 61ce2636-4ab6-4cce-8055-b39a83b48efe 11585561 0 2020-04-28 00:29:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 28 00:29:06.097: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-516 /api/v1/namespaces/watch-516/configmaps/e2e-watch-test-label-changed 61ce2636-4ab6-4cce-8055-b39a83b48efe 11585562 0 2020-04-28 00:29:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 28 00:29:06.097: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-516 /api/v1/namespaces/watch-516/configmaps/e2e-watch-test-label-changed 61ce2636-4ab6-4cce-8055-b39a83b48efe 11585563 0 2020-04-28 00:29:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 28 00:29:16.139: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-516 /api/v1/namespaces/watch-516/configmaps/e2e-watch-test-label-changed 61ce2636-4ab6-4cce-8055-b39a83b48efe 11585611 0 2020-04-28 00:29:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 28 00:29:16.139: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-516 /api/v1/namespaces/watch-516/configmaps/e2e-watch-test-label-changed 61ce2636-4ab6-4cce-8055-b39a83b48efe 11585612 0 2020-04-28 00:29:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 28 00:29:16.139: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-516 /api/v1/namespaces/watch-516/configmaps/e2e-watch-test-label-changed 61ce2636-4ab6-4cce-8055-b39a83b48efe 11585613 0 2020-04-28 00:29:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:29:16.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-516" for this suite.
• [SLOW TEST:10.133 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":104,"skipped":1867,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:29:16.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 28 00:29:16.271: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f4e8ae4-c438-43dc-99cc-34a66d3fa2ba" in namespace "projected-5743" to be "Succeeded or Failed"
Apr 28 00:29:16.278: INFO: Pod "downwardapi-volume-3f4e8ae4-c438-43dc-99cc-34a66d3fa2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 7.081624ms
Apr 28 00:29:18.282: INFO: Pod "downwardapi-volume-3f4e8ae4-c438-43dc-99cc-34a66d3fa2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011626102s
Apr 28 00:29:20.286: INFO: Pod "downwardapi-volume-3f4e8ae4-c438-43dc-99cc-34a66d3fa2ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01564741s
STEP: Saw pod success
Apr 28 00:29:20.286: INFO: Pod "downwardapi-volume-3f4e8ae4-c438-43dc-99cc-34a66d3fa2ba" satisfied condition "Succeeded or Failed"
Apr 28 00:29:20.289: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3f4e8ae4-c438-43dc-99cc-34a66d3fa2ba container client-container:
STEP: delete the pod
Apr 28 00:29:20.402: INFO: Waiting for pod downwardapi-volume-3f4e8ae4-c438-43dc-99cc-34a66d3fa2ba to disappear
Apr 28 00:29:20.415: INFO: Pod downwardapi-volume-3f4e8ae4-c438-43dc-99cc-34a66d3fa2ba no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:29:20.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5743" for this suite.
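The "podname only" variant above uses a `fieldRef` (object metadata) rather than a `resourceFieldRef` (container resources). A sketch of the relevant volume wiring, with assumed names and image:

```yaml
# Sketch only: exposing metadata.name through a projected downward API volume.
apiVersion: v1
kind: Pod
metadata:
  name: podname-example            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31            # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

The container reads back its own pod name from the mounted file, which is what the test's log check verifies.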
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1868,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:29:20.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:29:36.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5395" for 
this suite. • [SLOW TEST:16.214 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":106,"skipped":1891,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:29:36.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 28 00:29:36.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6388' Apr 28 00:29:36.977: INFO: stderr: "" Apr 28 00:29:36.977: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 28 00:29:37.982: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 00:29:37.982: INFO: Found 0 / 1 Apr 28 00:29:38.982: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 00:29:38.982: INFO: Found 0 / 1 Apr 28 00:29:39.982: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 00:29:39.982: INFO: Found 0 / 1 Apr 28 00:29:40.981: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 00:29:40.981: INFO: Found 1 / 1 Apr 28 00:29:40.981: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 28 00:29:40.984: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 00:29:40.984: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 28 00:29:40.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-6bqr9 --namespace=kubectl-6388 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 28 00:29:41.090: INFO: stderr: "" Apr 28 00:29:41.090: INFO: stdout: "pod/agnhost-master-6bqr9 patched\n" STEP: checking annotations Apr 28 00:29:41.095: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 00:29:41.095: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:29:41.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6388" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":107,"skipped":1905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:29:41.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 28 00:29:41.151: INFO: PodSpec: initContainers in spec.initContainers Apr 28 00:30:31.226: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-12695c95-7b0a-4bf1-8097-1cad6eb26dec", GenerateName:"", Namespace:"init-container-519", SelfLink:"/api/v1/namespaces/init-container-519/pods/pod-init-12695c95-7b0a-4bf1-8097-1cad6eb26dec", UID:"e4596a44-a04d-4521-a5b2-e0a59ae065d9", ResourceVersion:"11585986", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723630581, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"151492259"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-w9d2g", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002cd2fc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-w9d2g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-w9d2g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-w9d2g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003a40648), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0029092d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003a406d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003a406f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003a406f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003a406fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630581, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630581, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630581, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630581, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.2.101", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.101"}}, StartTime:(*v1.Time)(0xc004714e80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0047153e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029093b0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://99939564bd72a097a503dbde2bab4d679eb4113fb95af86a730c6cdf17788900", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004715420), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004715140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc003a4077f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:30:31.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-519" for this suite. 
• [SLOW TEST:50.251 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":108,"skipped":1928,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:30:31.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-vbpx STEP: Creating a pod to test atomic-volume-subpath Apr 28 00:30:31.534: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vbpx" in namespace "subpath-74" to be "Succeeded or Failed" Apr 28 00:30:31.537: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.159409ms Apr 28 00:30:33.541: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007128594s Apr 28 00:30:35.545: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Running", Reason="", readiness=true. Elapsed: 4.011052815s Apr 28 00:30:37.550: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Running", Reason="", readiness=true. Elapsed: 6.015365144s Apr 28 00:30:39.554: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Running", Reason="", readiness=true. Elapsed: 8.019508922s Apr 28 00:30:41.558: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Running", Reason="", readiness=true. Elapsed: 10.023678884s Apr 28 00:30:43.562: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Running", Reason="", readiness=true. Elapsed: 12.027827758s Apr 28 00:30:45.566: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Running", Reason="", readiness=true. Elapsed: 14.031922574s Apr 28 00:30:47.571: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Running", Reason="", readiness=true. Elapsed: 16.036545216s Apr 28 00:30:49.575: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Running", Reason="", readiness=true. Elapsed: 18.040578734s Apr 28 00:30:51.579: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Running", Reason="", readiness=true. Elapsed: 20.045140946s Apr 28 00:30:53.584: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Running", Reason="", readiness=true. Elapsed: 22.049351555s Apr 28 00:30:55.588: INFO: Pod "pod-subpath-test-configmap-vbpx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.053316197s STEP: Saw pod success Apr 28 00:30:55.588: INFO: Pod "pod-subpath-test-configmap-vbpx" satisfied condition "Succeeded or Failed" Apr 28 00:30:55.591: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-vbpx container test-container-subpath-configmap-vbpx: STEP: delete the pod Apr 28 00:30:55.657: INFO: Waiting for pod pod-subpath-test-configmap-vbpx to disappear Apr 28 00:30:55.683: INFO: Pod pod-subpath-test-configmap-vbpx no longer exists STEP: Deleting pod pod-subpath-test-configmap-vbpx Apr 28 00:30:55.683: INFO: Deleting pod "pod-subpath-test-configmap-vbpx" in namespace "subpath-74" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:30:55.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-74" for this suite. • [SLOW TEST:24.345 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":109,"skipped":1938,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:30:55.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0428 00:30:56.791353 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 28 00:30:56.791: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:30:56.791: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1738" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":110,"skipped":1943,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:30:56.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Apr 28 00:30:56.972: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:30:57.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7608" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":111,"skipped":1947,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:30:57.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:30:57.118: INFO: Creating deployment "test-recreate-deployment" Apr 28 00:30:57.126: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 28 00:30:57.156: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 28 00:30:59.195: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 28 00:30:59.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630657, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630657, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630657, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630657, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:31:01.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630657, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630657, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630657, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630657, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:31:03.202: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 28 00:31:03.210: INFO: Updating deployment test-recreate-deployment Apr 28 00:31:03.210: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 28 00:31:03.808: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5248 
/apis/apps/v1/namespaces/deployment-5248/deployments/test-recreate-deployment d1ebed83-3258-4969-8bd5-a4aa9bab181b 11586222 2 2020-04-28 00:30:57 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039517a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-28 00:31:03 +0000 UTC,LastTransitionTime:2020-04-28 00:31:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-28 00:31:03 +0000 UTC,LastTransitionTime:2020-04-28 00:30:57 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 28 00:31:03.820: INFO: New ReplicaSet 
"test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-5248 /apis/apps/v1/namespaces/deployment-5248/replicasets/test-recreate-deployment-5f94c574ff 5df47a2d-c26e-4dcf-ba82-66713823caf6 11586219 1 2020-04-28 00:31:03 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment d1ebed83-3258-4969-8bd5-a4aa9bab181b 0xc00392acf7 0xc00392acf8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00392ad58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 28 00:31:03.820: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 28 00:31:03.820: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-5248 
/apis/apps/v1/namespaces/deployment-5248/replicasets/test-recreate-deployment-846c7dd955 d9ffcd28-ef72-4537-9493-351d360e0e04 11586210 2 2020-04-28 00:30:57 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment d1ebed83-3258-4969-8bd5-a4aa9bab181b 0xc00392adc7 0xc00392adc8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00392ae38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 28 00:31:03.823: INFO: Pod "test-recreate-deployment-5f94c574ff-c5jzm" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-c5jzm test-recreate-deployment-5f94c574ff- deployment-5248 /api/v1/namespaces/deployment-5248/pods/test-recreate-deployment-5f94c574ff-c5jzm f2c051e3-335e-4f95-9cac-ed532a965b19 11586224 0 2020-04-28 00:31:03 +0000 UTC 
map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 5df47a2d-c26e-4dcf-ba82-66713823caf6 0xc003951bb7 0xc003951bb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m4m2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m4m2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m4m2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptio
ns:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:31:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:31:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:31:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:31:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-28 00:31:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:31:03.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5248" for this suite. • [SLOW TEST:6.750 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":112,"skipped":1962,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:31:03.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] 
should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:31:37.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-821" for this suite. 
• [SLOW TEST:33.595 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1992,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:31:37.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-992 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss 
in namespace statefulset-992 Apr 28 00:31:37.504: INFO: Found 0 stateful pods, waiting for 1 Apr 28 00:31:47.516: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 28 00:31:47.535: INFO: Deleting all statefulset in ns statefulset-992 Apr 28 00:31:47.540: INFO: Scaling statefulset ss to 0 Apr 28 00:32:07.593: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 00:32:07.596: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:32:07.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-992" for this suite. 
• [SLOW TEST:30.205 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":114,"skipped":2013,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:32:07.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:32:07.756: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "pods-4438" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":115,"skipped":2025,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:32:07.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 00:32:08.254: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 00:32:10.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630728, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630728, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630728, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630728, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 00:32:13.302: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:32:13.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7006" for this suite. STEP: Destroying namespace "webhook-7006-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.018 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":116,"skipped":2029,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:32:13.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:32:14.128: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d065f266-ff04-408d-a4db-5cf43cedf2b8", Controller:(*bool)(0xc002b8b93a), BlockOwnerDeletion:(*bool)(0xc002b8b93b)}} Apr 28 00:32:14.368: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"18603e56-fc77-4412-9562-ced4fa15e0a7", Controller:(*bool)(0xc003c86c6a), 
BlockOwnerDeletion:(*bool)(0xc003c86c6b)}} Apr 28 00:32:14.392: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d8e24866-866c-4ae4-adf7-1308774a505f", Controller:(*bool)(0xc003c86e32), BlockOwnerDeletion:(*bool)(0xc003c86e33)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:32:19.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8288" for this suite. • [SLOW TEST:5.717 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":117,"skipped":2034,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:32:19.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 28 00:32:19.559: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 28 
00:32:19.614: INFO: Waiting for terminating namespaces to be deleted... Apr 28 00:32:19.626: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 28 00:32:19.630: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:32:19.630: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 00:32:19.630: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:32:19.630: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 00:32:19.630: INFO: pod-qos-class-9c9d140d-d673-48f1-9e1c-641a6fb200a3 from pods-4438 started at 2020-04-28 00:32:07 +0000 UTC (1 container statuses recorded) Apr 28 00:32:19.630: INFO: Container agnhost ready: false, restart count 0 Apr 28 00:32:19.630: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 28 00:32:19.640: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:32:19.640: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 00:32:19.640: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:32:19.640: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-384128b4-ca5b-4c3c-a8e2-e1468c9ac046 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-384128b4-ca5b-4c3c-a8e2-e1468c9ac046 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-384128b4-ca5b-4c3c-a8e2-e1468c9ac046 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:32:27.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4906" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.290 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":118,"skipped":2056,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:32:27.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:32:27.852: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:32:34.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1553" for this suite. • [SLOW TEST:6.384 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":119,"skipped":2056,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:32:34.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 00:32:34.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76d29f4a-5ede-4b4e-b893-07644c9987b2" in namespace "downward-api-470" to be "Succeeded or Failed" Apr 28 00:32:34.255: INFO: Pod "downwardapi-volume-76d29f4a-5ede-4b4e-b893-07644c9987b2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.48424ms Apr 28 00:32:36.320: INFO: Pod "downwardapi-volume-76d29f4a-5ede-4b4e-b893-07644c9987b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081746251s Apr 28 00:32:38.324: INFO: Pod "downwardapi-volume-76d29f4a-5ede-4b4e-b893-07644c9987b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086248788s STEP: Saw pod success Apr 28 00:32:38.324: INFO: Pod "downwardapi-volume-76d29f4a-5ede-4b4e-b893-07644c9987b2" satisfied condition "Succeeded or Failed" Apr 28 00:32:38.328: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-76d29f4a-5ede-4b4e-b893-07644c9987b2 container client-container: STEP: delete the pod Apr 28 00:32:38.364: INFO: Waiting for pod downwardapi-volume-76d29f4a-5ede-4b4e-b893-07644c9987b2 to disappear Apr 28 00:32:38.380: INFO: Pod downwardapi-volume-76d29f4a-5ede-4b4e-b893-07644c9987b2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:32:38.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-470" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":2060,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:32:38.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-569 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-569 I0428 00:32:38.536564 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-569, replica count: 2 I0428 00:32:41.587050 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 00:32:44.587293 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 28 00:32:44.587: INFO: Creating new exec pod Apr 28 00:32:49.600: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-569 execpod8pgld -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 28 00:32:49.838: INFO: stderr: "I0428 00:32:49.742642 1485 log.go:172] (0xc000a54000) (0xc0009b8000) Create stream\nI0428 00:32:49.742708 1485 log.go:172] (0xc000a54000) (0xc0009b8000) Stream added, broadcasting: 1\nI0428 00:32:49.746149 1485 log.go:172] (0xc000a54000) Reply frame received for 1\nI0428 00:32:49.746299 1485 log.go:172] (0xc000a54000) (0xc000a1e000) Create stream\nI0428 00:32:49.746327 1485 log.go:172] (0xc000a54000) (0xc000a1e000) Stream added, broadcasting: 3\nI0428 00:32:49.747781 1485 log.go:172] (0xc000a54000) Reply frame received for 3\nI0428 00:32:49.747879 1485 log.go:172] (0xc000a54000) (0xc000a0a000) Create stream\nI0428 00:32:49.747951 1485 log.go:172] (0xc000a54000) (0xc000a0a000) Stream added, broadcasting: 5\nI0428 00:32:49.749728 1485 log.go:172] (0xc000a54000) Reply frame received for 5\nI0428 00:32:49.829581 1485 log.go:172] (0xc000a54000) Data frame received for 5\nI0428 00:32:49.829618 1485 log.go:172] (0xc000a0a000) (5) Data frame handling\nI0428 00:32:49.829637 1485 log.go:172] (0xc000a0a000) (5) Data frame sent\nI0428 00:32:49.829647 1485 log.go:172] (0xc000a54000) Data frame received for 5\n+ nc -zv -t -w 2 externalname-service 80\nI0428 00:32:49.829655 1485 log.go:172] (0xc000a0a000) (5) Data frame handling\nI0428 00:32:49.829693 1485 log.go:172] (0xc000a0a000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0428 00:32:49.830072 1485 log.go:172] (0xc000a54000) Data frame received for 3\nI0428 00:32:49.830094 1485 log.go:172] (0xc000a1e000) (3) Data frame handling\nI0428 00:32:49.830165 1485 log.go:172] (0xc000a54000) Data frame received for 5\nI0428 00:32:49.830195 1485 log.go:172] (0xc000a0a000) (5) Data frame handling\nI0428 00:32:49.833815 1485 log.go:172] (0xc000a54000) Data frame received for 1\nI0428 
00:32:49.833837 1485 log.go:172] (0xc0009b8000) (1) Data frame handling\nI0428 00:32:49.833862 1485 log.go:172] (0xc0009b8000) (1) Data frame sent\nI0428 00:32:49.833878 1485 log.go:172] (0xc000a54000) (0xc0009b8000) Stream removed, broadcasting: 1\nI0428 00:32:49.833895 1485 log.go:172] (0xc000a54000) Go away received\nI0428 00:32:49.834199 1485 log.go:172] (0xc000a54000) (0xc0009b8000) Stream removed, broadcasting: 1\nI0428 00:32:49.834215 1485 log.go:172] (0xc000a54000) (0xc000a1e000) Stream removed, broadcasting: 3\nI0428 00:32:49.834223 1485 log.go:172] (0xc000a54000) (0xc000a0a000) Stream removed, broadcasting: 5\n" Apr 28 00:32:49.838: INFO: stdout: "" Apr 28 00:32:49.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-569 execpod8pgld -- /bin/sh -x -c nc -zv -t -w 2 10.96.64.85 80' Apr 28 00:32:50.060: INFO: stderr: "I0428 00:32:49.974073 1505 log.go:172] (0xc00003a630) (0xc000805360) Create stream\nI0428 00:32:49.974152 1505 log.go:172] (0xc00003a630) (0xc000805360) Stream added, broadcasting: 1\nI0428 00:32:49.977624 1505 log.go:172] (0xc00003a630) Reply frame received for 1\nI0428 00:32:49.977685 1505 log.go:172] (0xc00003a630) (0xc000016000) Create stream\nI0428 00:32:49.977707 1505 log.go:172] (0xc00003a630) (0xc000016000) Stream added, broadcasting: 3\nI0428 00:32:49.978970 1505 log.go:172] (0xc00003a630) Reply frame received for 3\nI0428 00:32:49.979017 1505 log.go:172] (0xc00003a630) (0xc0000c0000) Create stream\nI0428 00:32:49.979032 1505 log.go:172] (0xc00003a630) (0xc0000c0000) Stream added, broadcasting: 5\nI0428 00:32:49.980158 1505 log.go:172] (0xc00003a630) Reply frame received for 5\nI0428 00:32:50.055337 1505 log.go:172] (0xc00003a630) Data frame received for 3\nI0428 00:32:50.055393 1505 log.go:172] (0xc000016000) (3) Data frame handling\nI0428 00:32:50.055432 1505 log.go:172] (0xc00003a630) Data frame received for 5\nI0428 00:32:50.055459 1505 log.go:172] 
(0xc0000c0000) (5) Data frame handling\nI0428 00:32:50.055493 1505 log.go:172] (0xc0000c0000) (5) Data frame sent\nI0428 00:32:50.055516 1505 log.go:172] (0xc00003a630) Data frame received for 5\nI0428 00:32:50.055539 1505 log.go:172] (0xc0000c0000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.64.85 80\nConnection to 10.96.64.85 80 port [tcp/http] succeeded!\nI0428 00:32:50.056106 1505 log.go:172] (0xc00003a630) Data frame received for 1\nI0428 00:32:50.056124 1505 log.go:172] (0xc000805360) (1) Data frame handling\nI0428 00:32:50.056133 1505 log.go:172] (0xc000805360) (1) Data frame sent\nI0428 00:32:50.056146 1505 log.go:172] (0xc00003a630) (0xc000805360) Stream removed, broadcasting: 1\nI0428 00:32:50.056172 1505 log.go:172] (0xc00003a630) Go away received\nI0428 00:32:50.056587 1505 log.go:172] (0xc00003a630) (0xc000805360) Stream removed, broadcasting: 1\nI0428 00:32:50.056610 1505 log.go:172] (0xc00003a630) (0xc000016000) Stream removed, broadcasting: 3\nI0428 00:32:50.056622 1505 log.go:172] (0xc00003a630) (0xc0000c0000) Stream removed, broadcasting: 5\n" Apr 28 00:32:50.061: INFO: stdout: "" Apr 28 00:32:50.061: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:32:50.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-569" for this suite. 
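The connectivity check this Services test performs can be reproduced by hand. A minimal sketch, assuming `kubectl` is on `PATH` and pointed at the same cluster; the namespace, exec pod, and ClusterIP below are the ones from this run and will differ on any other cluster:

```shell
# Reproduce the probe the test runs: connect to the service from inside
# a helper pod, first by DNS name, then by its assigned ClusterIP.
NS=services-569          # namespace created by this run
POD=execpod8pgld         # exec helper pod created by this run

# Build the nc probe: -z scan only, -v verbose, -t TCP, -w 2s timeout.
nc_cmd() { printf 'nc -zv -t -w 2 %s %s' "$1" "$2"; }

if command -v kubectl >/dev/null 2>&1; then
  kubectl exec -n "$NS" "$POD" -- /bin/sh -x -c "$(nc_cmd externalname-service 80)"
  kubectl exec -n "$NS" "$POD" -- /bin/sh -x -c "$(nc_cmd 10.96.64.85 80)"
fi
```

`nc -z` exits 0 on a successful TCP connect without sending data, which is why the test treats an empty stdout plus the "succeeded!" line on stderr as a pass.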
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.743 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":121,"skipped":2092,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:32:50.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 28 00:32:50.176: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1887' Apr 28 00:32:50.498: INFO: stderr: "" Apr 28 00:32:50.498: INFO: stdout: "replicationcontroller/update-demo-nautilus 
created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 28 00:32:50.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1887' Apr 28 00:32:50.605: INFO: stderr: "" Apr 28 00:32:50.605: INFO: stdout: "update-demo-nautilus-64xmd update-demo-nautilus-kntzs " Apr 28 00:32:50.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-64xmd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:32:50.713: INFO: stderr: "" Apr 28 00:32:50.713: INFO: stdout: "" Apr 28 00:32:50.713: INFO: update-demo-nautilus-64xmd is created but not running Apr 28 00:32:55.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1887' Apr 28 00:32:55.808: INFO: stderr: "" Apr 28 00:32:55.808: INFO: stdout: "update-demo-nautilus-64xmd update-demo-nautilus-kntzs " Apr 28 00:32:55.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-64xmd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:32:55.910: INFO: stderr: "" Apr 28 00:32:55.910: INFO: stdout: "true" Apr 28 00:32:55.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-64xmd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:32:56.095: INFO: stderr: "" Apr 28 00:32:56.095: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 00:32:56.095: INFO: validating pod update-demo-nautilus-64xmd Apr 28 00:32:56.099: INFO: got data: { "image": "nautilus.jpg" } Apr 28 00:32:56.100: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 00:32:56.100: INFO: update-demo-nautilus-64xmd is verified up and running Apr 28 00:32:56.100: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kntzs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:32:56.192: INFO: stderr: "" Apr 28 00:32:56.192: INFO: stdout: "true" Apr 28 00:32:56.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kntzs -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:32:56.273: INFO: stderr: "" Apr 28 00:32:56.273: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 00:32:56.273: INFO: validating pod update-demo-nautilus-kntzs Apr 28 00:32:56.276: INFO: got data: { "image": "nautilus.jpg" } Apr 28 00:32:56.276: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 00:32:56.276: INFO: update-demo-nautilus-kntzs is verified up and running STEP: scaling down the replication controller Apr 28 00:32:56.278: INFO: scanned /root for discovery docs: Apr 28 00:32:56.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1887' Apr 28 00:32:57.393: INFO: stderr: "" Apr 28 00:32:57.393: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 28 00:32:57.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1887' Apr 28 00:32:57.487: INFO: stderr: "" Apr 28 00:32:57.487: INFO: stdout: "update-demo-nautilus-64xmd update-demo-nautilus-kntzs " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 28 00:33:02.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1887' Apr 28 00:33:02.589: INFO: stderr: "" Apr 28 00:33:02.589: INFO: stdout: "update-demo-nautilus-64xmd update-demo-nautilus-kntzs " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 28 00:33:07.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1887' Apr 28 00:33:07.682: INFO: stderr: "" Apr 28 00:33:07.682: INFO: stdout: "update-demo-nautilus-64xmd " Apr 28 00:33:07.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-64xmd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:33:07.775: INFO: stderr: "" Apr 28 00:33:07.775: INFO: stdout: "true" Apr 28 00:33:07.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-64xmd -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:33:07.868: INFO: stderr: "" Apr 28 00:33:07.868: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 00:33:07.868: INFO: validating pod update-demo-nautilus-64xmd Apr 28 00:33:07.871: INFO: got data: { "image": "nautilus.jpg" } Apr 28 00:33:07.871: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 00:33:07.871: INFO: update-demo-nautilus-64xmd is verified up and running STEP: scaling up the replication controller Apr 28 00:33:07.874: INFO: scanned /root for discovery docs: Apr 28 00:33:07.874: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1887' Apr 28 00:33:09.000: INFO: stderr: "" Apr 28 00:33:09.000: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 28 00:33:09.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1887' Apr 28 00:33:09.098: INFO: stderr: "" Apr 28 00:33:09.099: INFO: stdout: "update-demo-nautilus-64xmd update-demo-nautilus-xg69h " Apr 28 00:33:09.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-64xmd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:33:09.190: INFO: stderr: "" Apr 28 00:33:09.190: INFO: stdout: "true" Apr 28 00:33:09.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-64xmd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:33:09.281: INFO: stderr: "" Apr 28 00:33:09.281: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 00:33:09.281: INFO: validating pod update-demo-nautilus-64xmd Apr 28 00:33:09.314: INFO: got data: { "image": "nautilus.jpg" } Apr 28 00:33:09.314: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 00:33:09.314: INFO: update-demo-nautilus-64xmd is verified up and running Apr 28 00:33:09.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xg69h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:33:09.401: INFO: stderr: "" Apr 28 00:33:09.401: INFO: stdout: "" Apr 28 00:33:09.401: INFO: update-demo-nautilus-xg69h is created but not running Apr 28 00:33:14.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1887' Apr 28 00:33:14.506: INFO: stderr: "" Apr 28 00:33:14.506: INFO: stdout: "update-demo-nautilus-64xmd update-demo-nautilus-xg69h " Apr 28 00:33:14.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-64xmd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:33:14.593: INFO: stderr: "" Apr 28 00:33:14.593: INFO: stdout: "true" Apr 28 00:33:14.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-64xmd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:33:14.689: INFO: stderr: "" Apr 28 00:33:14.689: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 00:33:14.689: INFO: validating pod update-demo-nautilus-64xmd Apr 28 00:33:14.692: INFO: got data: { "image": "nautilus.jpg" } Apr 28 00:33:14.692: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 28 00:33:14.692: INFO: update-demo-nautilus-64xmd is verified up and running Apr 28 00:33:14.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xg69h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:33:14.791: INFO: stderr: "" Apr 28 00:33:14.791: INFO: stdout: "true" Apr 28 00:33:14.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xg69h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1887' Apr 28 00:33:14.890: INFO: stderr: "" Apr 28 00:33:14.890: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 00:33:14.890: INFO: validating pod update-demo-nautilus-xg69h Apr 28 00:33:14.895: INFO: got data: { "image": "nautilus.jpg" } Apr 28 00:33:14.895: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 00:33:14.895: INFO: update-demo-nautilus-xg69h is verified up and running STEP: using delete to clean up resources Apr 28 00:33:14.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1887' Apr 28 00:33:14.989: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 28 00:33:14.989: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 28 00:33:14.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1887' Apr 28 00:33:15.079: INFO: stderr: "No resources found in kubectl-1887 namespace.\n" Apr 28 00:33:15.079: INFO: stdout: "" Apr 28 00:33:15.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1887 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 00:33:15.206: INFO: stderr: "" Apr 28 00:33:15.206: INFO: stdout: "update-demo-nautilus-64xmd\nupdate-demo-nautilus-xg69h\n" Apr 28 00:33:15.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1887' Apr 28 00:33:15.800: INFO: stderr: "No resources found in kubectl-1887 namespace.\n" Apr 28 00:33:15.800: INFO: stdout: "" Apr 28 00:33:15.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1887 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 00:33:15.904: INFO: stderr: "" Apr 28 00:33:15.904: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:33:15.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1887" for this suite. 
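The Update Demo test above follows one pattern throughout: scale the replication controller, then poll each pod with a go-template that prints `true` only when the named container is running. A sketch of that loop, assuming `kubectl` is available and using the namespace and RC names from this run:

```shell
# Scale an RC and poll pod readiness the way the Update Demo test does.
NS=kubectl-1887          # namespace created by this run
RC=update-demo-nautilus  # replication controller created by this run

# Go-template (verbatim from the test's kubectl invocations) that emits
# "true" iff the container named $1 reports a running state.
running_tmpl() {
  printf '{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "%s") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' "$1"
}

if command -v kubectl >/dev/null 2>&1; then
  kubectl scale rc "$RC" --replicas=1 --timeout=5m -n "$NS"
  # List matching pods, then check each one; an empty result means the
  # pod is created but not yet running, so the test retries after 5s.
  for pod in $(kubectl get pods -l name=update-demo -n "$NS" \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'); do
    kubectl get pods "$pod" -n "$NS" \
      -o template --template="$(running_tmpl update-demo)"
  done
fi
```

Note the `exists` helper in the template is supported by kubectl's `-o template` output (it is not a stock Go text/template function), which is why the same expression appears verbatim in the log above.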
• [SLOW TEST:25.778 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":122,"skipped":2120,"failed":0} SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:33:15.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-1061/configmap-test-418bb3f6-3b09-4b6d-8589-68b33ae08567 STEP: Creating a pod to test consume configMaps Apr 28 00:33:16.167: INFO: Waiting up to 5m0s for pod "pod-configmaps-d421eda8-0528-4de5-a065-af55f440baeb" in namespace "configmap-1061" to be "Succeeded or Failed" Apr 28 00:33:16.181: INFO: Pod "pod-configmaps-d421eda8-0528-4de5-a065-af55f440baeb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.383024ms Apr 28 00:33:18.186: INFO: Pod "pod-configmaps-d421eda8-0528-4de5-a065-af55f440baeb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018493934s Apr 28 00:33:20.190: INFO: Pod "pod-configmaps-d421eda8-0528-4de5-a065-af55f440baeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022626952s STEP: Saw pod success Apr 28 00:33:20.190: INFO: Pod "pod-configmaps-d421eda8-0528-4de5-a065-af55f440baeb" satisfied condition "Succeeded or Failed" Apr 28 00:33:20.193: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d421eda8-0528-4de5-a065-af55f440baeb container env-test: STEP: delete the pod Apr 28 00:33:20.251: INFO: Waiting for pod pod-configmaps-d421eda8-0528-4de5-a065-af55f440baeb to disappear Apr 28 00:33:20.338: INFO: Pod pod-configmaps-d421eda8-0528-4de5-a065-af55f440baeb no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:33:20.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1061" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2129,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:33:20.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 28 00:33:20.419: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:33:28.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8490" for this suite. • [SLOW TEST:8.261 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":124,"skipped":2130,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:33:28.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the 
test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 28 00:33:38.843: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7200 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:33:38.843: INFO: >>> kubeConfig: /root/.kube/config I0428 00:33:38.878086 7 log.go:172] (0xc002dd8580) (0xc002c41900) Create stream I0428 00:33:38.878117 7 log.go:172] (0xc002dd8580) (0xc002c41900) Stream added, broadcasting: 1 I0428 00:33:38.879922 7 log.go:172] (0xc002dd8580) Reply frame received for 1 I0428 00:33:38.879949 7 log.go:172] (0xc002dd8580) (0xc001d8a3c0) Create stream I0428 00:33:38.879958 7 log.go:172] (0xc002dd8580) (0xc001d8a3c0) Stream added, broadcasting: 3 I0428 00:33:38.880771 7 log.go:172] (0xc002dd8580) Reply frame received for 3 I0428 00:33:38.880791 7 log.go:172] (0xc002dd8580) (0xc002c419a0) Create stream I0428 00:33:38.880797 7 log.go:172] (0xc002dd8580) (0xc002c419a0) Stream added, broadcasting: 5 I0428 00:33:38.881888 7 log.go:172] (0xc002dd8580) Reply frame received for 5 I0428 00:33:38.979385 7 log.go:172] (0xc002dd8580) Data frame received for 5 I0428 00:33:38.979426 7 log.go:172] (0xc002c419a0) (5) Data frame handling I0428 00:33:38.979452 7 log.go:172] (0xc002dd8580) Data frame received for 3 I0428 00:33:38.979463 7 log.go:172] (0xc001d8a3c0) (3) Data frame handling I0428 00:33:38.979481 7 log.go:172] (0xc001d8a3c0) (3) Data frame sent I0428 00:33:38.979493 7 log.go:172] (0xc002dd8580) Data frame received for 3 I0428 00:33:38.979504 7 log.go:172] (0xc001d8a3c0) (3) Data frame handling I0428 00:33:38.980756 7 log.go:172] (0xc002dd8580) Data frame received for 1 I0428 00:33:38.980776 7 log.go:172] (0xc002c41900) (1) Data frame handling I0428 00:33:38.980785 7 log.go:172] (0xc002c41900) (1) Data frame sent I0428 00:33:38.980798 7 log.go:172] (0xc002dd8580) (0xc002c41900) Stream removed, broadcasting: 1 I0428 
00:33:38.980813 7 log.go:172] (0xc002dd8580) Go away received I0428 00:33:38.980998 7 log.go:172] (0xc002dd8580) (0xc002c41900) Stream removed, broadcasting: 1 I0428 00:33:38.981027 7 log.go:172] (0xc002dd8580) (0xc001d8a3c0) Stream removed, broadcasting: 3 I0428 00:33:38.981036 7 log.go:172] (0xc002dd8580) (0xc002c419a0) Stream removed, broadcasting: 5 Apr 28 00:33:38.981: INFO: Exec stderr: "" Apr 28 00:33:38.981: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7200 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:33:38.981: INFO: >>> kubeConfig: /root/.kube/config I0428 00:33:39.004088 7 log.go:172] (0xc0020d4160) (0xc0028cc780) Create stream I0428 00:33:39.004113 7 log.go:172] (0xc0020d4160) (0xc0028cc780) Stream added, broadcasting: 1 I0428 00:33:39.006021 7 log.go:172] (0xc0020d4160) Reply frame received for 1 I0428 00:33:39.006062 7 log.go:172] (0xc0020d4160) (0xc001e78000) Create stream I0428 00:33:39.006074 7 log.go:172] (0xc0020d4160) (0xc001e78000) Stream added, broadcasting: 3 I0428 00:33:39.006892 7 log.go:172] (0xc0020d4160) Reply frame received for 3 I0428 00:33:39.006934 7 log.go:172] (0xc0020d4160) (0xc0028cc8c0) Create stream I0428 00:33:39.006963 7 log.go:172] (0xc0020d4160) (0xc0028cc8c0) Stream added, broadcasting: 5 I0428 00:33:39.007909 7 log.go:172] (0xc0020d4160) Reply frame received for 5 I0428 00:33:39.068098 7 log.go:172] (0xc0020d4160) Data frame received for 3 I0428 00:33:39.068149 7 log.go:172] (0xc001e78000) (3) Data frame handling I0428 00:33:39.068169 7 log.go:172] (0xc001e78000) (3) Data frame sent I0428 00:33:39.068187 7 log.go:172] (0xc0020d4160) Data frame received for 3 I0428 00:33:39.068198 7 log.go:172] (0xc001e78000) (3) Data frame handling I0428 00:33:39.068248 7 log.go:172] (0xc0020d4160) Data frame received for 5 I0428 00:33:39.068295 7 log.go:172] (0xc0028cc8c0) (5) Data frame handling I0428 00:33:39.069648 
7 log.go:172] (0xc0020d4160) Data frame received for 1 I0428 00:33:39.069689 7 log.go:172] (0xc0028cc780) (1) Data frame handling I0428 00:33:39.069714 7 log.go:172] (0xc0028cc780) (1) Data frame sent I0428 00:33:39.069733 7 log.go:172] (0xc0020d4160) (0xc0028cc780) Stream removed, broadcasting: 1 I0428 00:33:39.069858 7 log.go:172] (0xc0020d4160) (0xc0028cc780) Stream removed, broadcasting: 1 I0428 00:33:39.069899 7 log.go:172] (0xc0020d4160) Go away received I0428 00:33:39.069931 7 log.go:172] (0xc0020d4160) (0xc001e78000) Stream removed, broadcasting: 3 I0428 00:33:39.069951 7 log.go:172] (0xc0020d4160) (0xc0028cc8c0) Stream removed, broadcasting: 5 Apr 28 00:33:39.069: INFO: Exec stderr: "" Apr 28 00:33:39.069: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7200 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:33:39.070: INFO: >>> kubeConfig: /root/.kube/config I0428 00:33:39.097035 7 log.go:172] (0xc002dd8c60) (0xc002c41ae0) Create stream I0428 00:33:39.097072 7 log.go:172] (0xc002dd8c60) (0xc002c41ae0) Stream added, broadcasting: 1 I0428 00:33:39.099361 7 log.go:172] (0xc002dd8c60) Reply frame received for 1 I0428 00:33:39.099397 7 log.go:172] (0xc002dd8c60) (0xc002858500) Create stream I0428 00:33:39.099408 7 log.go:172] (0xc002dd8c60) (0xc002858500) Stream added, broadcasting: 3 I0428 00:33:39.100600 7 log.go:172] (0xc002dd8c60) Reply frame received for 3 I0428 00:33:39.100638 7 log.go:172] (0xc002dd8c60) (0xc001e78140) Create stream I0428 00:33:39.100650 7 log.go:172] (0xc002dd8c60) (0xc001e78140) Stream added, broadcasting: 5 I0428 00:33:39.101700 7 log.go:172] (0xc002dd8c60) Reply frame received for 5 I0428 00:33:39.168511 7 log.go:172] (0xc002dd8c60) Data frame received for 3 I0428 00:33:39.168562 7 log.go:172] (0xc002858500) (3) Data frame handling I0428 00:33:39.168589 7 log.go:172] (0xc002858500) (3) Data frame sent I0428 00:33:39.168610 7 
log.go:172] (0xc002dd8c60) Data frame received for 3 I0428 00:33:39.168628 7 log.go:172] (0xc002858500) (3) Data frame handling I0428 00:33:39.168661 7 log.go:172] (0xc002dd8c60) Data frame received for 5 I0428 00:33:39.168687 7 log.go:172] (0xc001e78140) (5) Data frame handling I0428 00:33:39.170122 7 log.go:172] (0xc002dd8c60) Data frame received for 1 I0428 00:33:39.170142 7 log.go:172] (0xc002c41ae0) (1) Data frame handling I0428 00:33:39.170152 7 log.go:172] (0xc002c41ae0) (1) Data frame sent I0428 00:33:39.170166 7 log.go:172] (0xc002dd8c60) (0xc002c41ae0) Stream removed, broadcasting: 1 I0428 00:33:39.170179 7 log.go:172] (0xc002dd8c60) Go away received I0428 00:33:39.170240 7 log.go:172] (0xc002dd8c60) (0xc002c41ae0) Stream removed, broadcasting: 1 I0428 00:33:39.170264 7 log.go:172] (0xc002dd8c60) (0xc002858500) Stream removed, broadcasting: 3 I0428 00:33:39.170275 7 log.go:172] (0xc002dd8c60) (0xc001e78140) Stream removed, broadcasting: 5 Apr 28 00:33:39.170: INFO: Exec stderr: "" Apr 28 00:33:39.170: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7200 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:33:39.170: INFO: >>> kubeConfig: /root/.kube/config I0428 00:33:39.200600 7 log.go:172] (0xc0023dd550) (0xc001d8a8c0) Create stream I0428 00:33:39.200629 7 log.go:172] (0xc0023dd550) (0xc001d8a8c0) Stream added, broadcasting: 1 I0428 00:33:39.202745 7 log.go:172] (0xc0023dd550) Reply frame received for 1 I0428 00:33:39.202805 7 log.go:172] (0xc0023dd550) (0xc0028ccf00) Create stream I0428 00:33:39.202827 7 log.go:172] (0xc0023dd550) (0xc0028ccf00) Stream added, broadcasting: 3 I0428 00:33:39.203925 7 log.go:172] (0xc0023dd550) Reply frame received for 3 I0428 00:33:39.204009 7 log.go:172] (0xc0023dd550) (0xc002c41d60) Create stream I0428 00:33:39.204042 7 log.go:172] (0xc0023dd550) (0xc002c41d60) Stream added, broadcasting: 5 I0428 
00:33:39.205296 7 log.go:172] (0xc0023dd550) Reply frame received for 5 I0428 00:33:39.265404 7 log.go:172] (0xc0023dd550) Data frame received for 5 I0428 00:33:39.265466 7 log.go:172] (0xc002c41d60) (5) Data frame handling I0428 00:33:39.265503 7 log.go:172] (0xc0023dd550) Data frame received for 3 I0428 00:33:39.265522 7 log.go:172] (0xc0028ccf00) (3) Data frame handling I0428 00:33:39.265553 7 log.go:172] (0xc0028ccf00) (3) Data frame sent I0428 00:33:39.265571 7 log.go:172] (0xc0023dd550) Data frame received for 3 I0428 00:33:39.265586 7 log.go:172] (0xc0028ccf00) (3) Data frame handling I0428 00:33:39.267310 7 log.go:172] (0xc0023dd550) Data frame received for 1 I0428 00:33:39.267340 7 log.go:172] (0xc001d8a8c0) (1) Data frame handling I0428 00:33:39.267360 7 log.go:172] (0xc001d8a8c0) (1) Data frame sent I0428 00:33:39.267379 7 log.go:172] (0xc0023dd550) (0xc001d8a8c0) Stream removed, broadcasting: 1 I0428 00:33:39.267406 7 log.go:172] (0xc0023dd550) Go away received I0428 00:33:39.267533 7 log.go:172] (0xc0023dd550) (0xc001d8a8c0) Stream removed, broadcasting: 1 I0428 00:33:39.267561 7 log.go:172] (0xc0023dd550) (0xc0028ccf00) Stream removed, broadcasting: 3 I0428 00:33:39.267574 7 log.go:172] (0xc0023dd550) (0xc002c41d60) Stream removed, broadcasting: 5 Apr 28 00:33:39.267: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 28 00:33:39.267: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7200 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:33:39.267: INFO: >>> kubeConfig: /root/.kube/config I0428 00:33:39.301721 7 log.go:172] (0xc002dd9290) (0xc001f081e0) Create stream I0428 00:33:39.301751 7 log.go:172] (0xc002dd9290) (0xc001f081e0) Stream added, broadcasting: 1 I0428 00:33:39.304165 7 log.go:172] (0xc002dd9290) Reply frame received for 1 I0428 00:33:39.304190 7 
log.go:172] (0xc002dd9290) (0xc0028588c0) Create stream I0428 00:33:39.304202 7 log.go:172] (0xc002dd9290) (0xc0028588c0) Stream added, broadcasting: 3 I0428 00:33:39.305546 7 log.go:172] (0xc002dd9290) Reply frame received for 3 I0428 00:33:39.305590 7 log.go:172] (0xc002dd9290) (0xc0028cd220) Create stream I0428 00:33:39.305605 7 log.go:172] (0xc002dd9290) (0xc0028cd220) Stream added, broadcasting: 5 I0428 00:33:39.306563 7 log.go:172] (0xc002dd9290) Reply frame received for 5 I0428 00:33:39.365807 7 log.go:172] (0xc002dd9290) Data frame received for 3 I0428 00:33:39.365854 7 log.go:172] (0xc0028588c0) (3) Data frame handling I0428 00:33:39.365887 7 log.go:172] (0xc0028588c0) (3) Data frame sent I0428 00:33:39.365915 7 log.go:172] (0xc002dd9290) Data frame received for 3 I0428 00:33:39.365938 7 log.go:172] (0xc0028588c0) (3) Data frame handling I0428 00:33:39.366113 7 log.go:172] (0xc002dd9290) Data frame received for 5 I0428 00:33:39.366146 7 log.go:172] (0xc0028cd220) (5) Data frame handling I0428 00:33:39.367643 7 log.go:172] (0xc002dd9290) Data frame received for 1 I0428 00:33:39.367662 7 log.go:172] (0xc001f081e0) (1) Data frame handling I0428 00:33:39.367671 7 log.go:172] (0xc001f081e0) (1) Data frame sent I0428 00:33:39.367679 7 log.go:172] (0xc002dd9290) (0xc001f081e0) Stream removed, broadcasting: 1 I0428 00:33:39.367702 7 log.go:172] (0xc002dd9290) Go away received I0428 00:33:39.367805 7 log.go:172] (0xc002dd9290) (0xc001f081e0) Stream removed, broadcasting: 1 I0428 00:33:39.367828 7 log.go:172] (0xc002dd9290) (0xc0028588c0) Stream removed, broadcasting: 3 I0428 00:33:39.367838 7 log.go:172] (0xc002dd9290) (0xc0028cd220) Stream removed, broadcasting: 5 Apr 28 00:33:39.367: INFO: Exec stderr: "" Apr 28 00:33:39.367: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7200 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:33:39.367: INFO: >>> 
kubeConfig: /root/.kube/config I0428 00:33:39.403338 7 log.go:172] (0xc002dd98c0) (0xc001f08640) Create stream I0428 00:33:39.403384 7 log.go:172] (0xc002dd98c0) (0xc001f08640) Stream added, broadcasting: 1 I0428 00:33:39.405747 7 log.go:172] (0xc002dd98c0) Reply frame received for 1 I0428 00:33:39.405786 7 log.go:172] (0xc002dd98c0) (0xc001f08780) Create stream I0428 00:33:39.405801 7 log.go:172] (0xc002dd98c0) (0xc001f08780) Stream added, broadcasting: 3 I0428 00:33:39.406830 7 log.go:172] (0xc002dd98c0) Reply frame received for 3 I0428 00:33:39.406893 7 log.go:172] (0xc002dd98c0) (0xc001e781e0) Create stream I0428 00:33:39.406933 7 log.go:172] (0xc002dd98c0) (0xc001e781e0) Stream added, broadcasting: 5 I0428 00:33:39.407983 7 log.go:172] (0xc002dd98c0) Reply frame received for 5 I0428 00:33:39.465208 7 log.go:172] (0xc002dd98c0) Data frame received for 5 I0428 00:33:39.465240 7 log.go:172] (0xc001e781e0) (5) Data frame handling I0428 00:33:39.465274 7 log.go:172] (0xc002dd98c0) Data frame received for 3 I0428 00:33:39.465307 7 log.go:172] (0xc001f08780) (3) Data frame handling I0428 00:33:39.465330 7 log.go:172] (0xc001f08780) (3) Data frame sent I0428 00:33:39.465343 7 log.go:172] (0xc002dd98c0) Data frame received for 3 I0428 00:33:39.465354 7 log.go:172] (0xc001f08780) (3) Data frame handling I0428 00:33:39.466518 7 log.go:172] (0xc002dd98c0) Data frame received for 1 I0428 00:33:39.466543 7 log.go:172] (0xc001f08640) (1) Data frame handling I0428 00:33:39.466578 7 log.go:172] (0xc001f08640) (1) Data frame sent I0428 00:33:39.466592 7 log.go:172] (0xc002dd98c0) (0xc001f08640) Stream removed, broadcasting: 1 I0428 00:33:39.466606 7 log.go:172] (0xc002dd98c0) Go away received I0428 00:33:39.466745 7 log.go:172] (0xc002dd98c0) (0xc001f08640) Stream removed, broadcasting: 1 I0428 00:33:39.466787 7 log.go:172] (0xc002dd98c0) (0xc001f08780) Stream removed, broadcasting: 3 I0428 00:33:39.466812 7 log.go:172] (0xc002dd98c0) (0xc001e781e0) Stream removed, 
broadcasting: 5 Apr 28 00:33:39.466: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 28 00:33:39.466: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7200 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:33:39.466: INFO: >>> kubeConfig: /root/.kube/config I0428 00:33:39.494436 7 log.go:172] (0xc002dd9ef0) (0xc001f08be0) Create stream I0428 00:33:39.494456 7 log.go:172] (0xc002dd9ef0) (0xc001f08be0) Stream added, broadcasting: 1 I0428 00:33:39.496296 7 log.go:172] (0xc002dd9ef0) Reply frame received for 1 I0428 00:33:39.496350 7 log.go:172] (0xc002dd9ef0) (0xc001e78280) Create stream I0428 00:33:39.496369 7 log.go:172] (0xc002dd9ef0) (0xc001e78280) Stream added, broadcasting: 3 I0428 00:33:39.497628 7 log.go:172] (0xc002dd9ef0) Reply frame received for 3 I0428 00:33:39.497662 7 log.go:172] (0xc002dd9ef0) (0xc0028cd2c0) Create stream I0428 00:33:39.497685 7 log.go:172] (0xc002dd9ef0) (0xc0028cd2c0) Stream added, broadcasting: 5 I0428 00:33:39.498880 7 log.go:172] (0xc002dd9ef0) Reply frame received for 5 I0428 00:33:39.566112 7 log.go:172] (0xc002dd9ef0) Data frame received for 5 I0428 00:33:39.566187 7 log.go:172] (0xc0028cd2c0) (5) Data frame handling I0428 00:33:39.566225 7 log.go:172] (0xc002dd9ef0) Data frame received for 3 I0428 00:33:39.566246 7 log.go:172] (0xc001e78280) (3) Data frame handling I0428 00:33:39.566268 7 log.go:172] (0xc001e78280) (3) Data frame sent I0428 00:33:39.566288 7 log.go:172] (0xc002dd9ef0) Data frame received for 3 I0428 00:33:39.566304 7 log.go:172] (0xc001e78280) (3) Data frame handling I0428 00:33:39.567807 7 log.go:172] (0xc002dd9ef0) Data frame received for 1 I0428 00:33:39.567830 7 log.go:172] (0xc001f08be0) (1) Data frame handling I0428 00:33:39.567842 7 log.go:172] (0xc001f08be0) (1) Data frame sent I0428 00:33:39.567859 7 
log.go:172] (0xc002dd9ef0) (0xc001f08be0) Stream removed, broadcasting: 1 I0428 00:33:39.567887 7 log.go:172] (0xc002dd9ef0) Go away received I0428 00:33:39.568022 7 log.go:172] (0xc002dd9ef0) (0xc001f08be0) Stream removed, broadcasting: 1 I0428 00:33:39.568052 7 log.go:172] (0xc002dd9ef0) (0xc001e78280) Stream removed, broadcasting: 3 I0428 00:33:39.568063 7 log.go:172] (0xc002dd9ef0) (0xc0028cd2c0) Stream removed, broadcasting: 5 Apr 28 00:33:39.568: INFO: Exec stderr: "" Apr 28 00:33:39.568: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7200 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:33:39.568: INFO: >>> kubeConfig: /root/.kube/config I0428 00:33:39.603271 7 log.go:172] (0xc00233cbb0) (0xc002858c80) Create stream I0428 00:33:39.603316 7 log.go:172] (0xc00233cbb0) (0xc002858c80) Stream added, broadcasting: 1 I0428 00:33:39.605451 7 log.go:172] (0xc00233cbb0) Reply frame received for 1 I0428 00:33:39.605476 7 log.go:172] (0xc00233cbb0) (0xc001f08dc0) Create stream I0428 00:33:39.605487 7 log.go:172] (0xc00233cbb0) (0xc001f08dc0) Stream added, broadcasting: 3 I0428 00:33:39.606279 7 log.go:172] (0xc00233cbb0) Reply frame received for 3 I0428 00:33:39.606314 7 log.go:172] (0xc00233cbb0) (0xc001e783c0) Create stream I0428 00:33:39.606325 7 log.go:172] (0xc00233cbb0) (0xc001e783c0) Stream added, broadcasting: 5 I0428 00:33:39.607049 7 log.go:172] (0xc00233cbb0) Reply frame received for 5 I0428 00:33:39.669851 7 log.go:172] (0xc00233cbb0) Data frame received for 5 I0428 00:33:39.669891 7 log.go:172] (0xc001e783c0) (5) Data frame handling I0428 00:33:39.669919 7 log.go:172] (0xc00233cbb0) Data frame received for 3 I0428 00:33:39.669933 7 log.go:172] (0xc001f08dc0) (3) Data frame handling I0428 00:33:39.669949 7 log.go:172] (0xc001f08dc0) (3) Data frame sent I0428 00:33:39.669960 7 log.go:172] (0xc00233cbb0) Data frame received for 3 
I0428 00:33:39.669970 7 log.go:172] (0xc001f08dc0) (3) Data frame handling I0428 00:33:39.671516 7 log.go:172] (0xc00233cbb0) Data frame received for 1 I0428 00:33:39.671539 7 log.go:172] (0xc002858c80) (1) Data frame handling I0428 00:33:39.671562 7 log.go:172] (0xc002858c80) (1) Data frame sent I0428 00:33:39.671579 7 log.go:172] (0xc00233cbb0) (0xc002858c80) Stream removed, broadcasting: 1 I0428 00:33:39.671642 7 log.go:172] (0xc00233cbb0) Go away received I0428 00:33:39.671736 7 log.go:172] (0xc00233cbb0) (0xc002858c80) Stream removed, broadcasting: 1 I0428 00:33:39.671814 7 log.go:172] (0xc00233cbb0) (0xc001f08dc0) Stream removed, broadcasting: 3 I0428 00:33:39.671844 7 log.go:172] (0xc00233cbb0) (0xc001e783c0) Stream removed, broadcasting: 5 Apr 28 00:33:39.671: INFO: Exec stderr: "" Apr 28 00:33:39.671: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7200 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:33:39.671: INFO: >>> kubeConfig: /root/.kube/config I0428 00:33:39.705356 7 log.go:172] (0xc0068524d0) (0xc001f08fa0) Create stream I0428 00:33:39.705429 7 log.go:172] (0xc0068524d0) (0xc001f08fa0) Stream added, broadcasting: 1 I0428 00:33:39.707654 7 log.go:172] (0xc0068524d0) Reply frame received for 1 I0428 00:33:39.707716 7 log.go:172] (0xc0068524d0) (0xc001d8a960) Create stream I0428 00:33:39.707737 7 log.go:172] (0xc0068524d0) (0xc001d8a960) Stream added, broadcasting: 3 I0428 00:33:39.709044 7 log.go:172] (0xc0068524d0) Reply frame received for 3 I0428 00:33:39.709087 7 log.go:172] (0xc0068524d0) (0xc001d8aaa0) Create stream I0428 00:33:39.709102 7 log.go:172] (0xc0068524d0) (0xc001d8aaa0) Stream added, broadcasting: 5 I0428 00:33:39.710233 7 log.go:172] (0xc0068524d0) Reply frame received for 5 I0428 00:33:39.773913 7 log.go:172] (0xc0068524d0) Data frame received for 3 I0428 00:33:39.773948 7 log.go:172] (0xc001d8a960) (3) Data frame 
handling I0428 00:33:39.773961 7 log.go:172] (0xc001d8a960) (3) Data frame sent I0428 00:33:39.773972 7 log.go:172] (0xc0068524d0) Data frame received for 3 I0428 00:33:39.773995 7 log.go:172] (0xc001d8a960) (3) Data frame handling I0428 00:33:39.774034 7 log.go:172] (0xc0068524d0) Data frame received for 5 I0428 00:33:39.774053 7 log.go:172] (0xc001d8aaa0) (5) Data frame handling I0428 00:33:39.775834 7 log.go:172] (0xc0068524d0) Data frame received for 1 I0428 00:33:39.775894 7 log.go:172] (0xc001f08fa0) (1) Data frame handling I0428 00:33:39.775926 7 log.go:172] (0xc001f08fa0) (1) Data frame sent I0428 00:33:39.775975 7 log.go:172] (0xc0068524d0) (0xc001f08fa0) Stream removed, broadcasting: 1 I0428 00:33:39.776001 7 log.go:172] (0xc0068524d0) Go away received I0428 00:33:39.776129 7 log.go:172] (0xc0068524d0) (0xc001f08fa0) Stream removed, broadcasting: 1 I0428 00:33:39.776145 7 log.go:172] (0xc0068524d0) (0xc001d8a960) Stream removed, broadcasting: 3 I0428 00:33:39.776152 7 log.go:172] (0xc0068524d0) (0xc001d8aaa0) Stream removed, broadcasting: 5 Apr 28 00:33:39.776: INFO: Exec stderr: "" Apr 28 00:33:39.776: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7200 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:33:39.776: INFO: >>> kubeConfig: /root/.kube/config I0428 00:33:39.813894 7 log.go:172] (0xc0020d4a50) (0xc0028cd680) Create stream I0428 00:33:39.813924 7 log.go:172] (0xc0020d4a50) (0xc0028cd680) Stream added, broadcasting: 1 I0428 00:33:39.815798 7 log.go:172] (0xc0020d4a50) Reply frame received for 1 I0428 00:33:39.815839 7 log.go:172] (0xc0020d4a50) (0xc001f09040) Create stream I0428 00:33:39.815856 7 log.go:172] (0xc0020d4a50) (0xc001f09040) Stream added, broadcasting: 3 I0428 00:33:39.816806 7 log.go:172] (0xc0020d4a50) Reply frame received for 3 I0428 00:33:39.816832 7 log.go:172] (0xc0020d4a50) (0xc001f090e0) Create stream 
I0428 00:33:39.816847 7 log.go:172] (0xc0020d4a50) (0xc001f090e0) Stream added, broadcasting: 5 I0428 00:33:39.817970 7 log.go:172] (0xc0020d4a50) Reply frame received for 5 I0428 00:33:39.883539 7 log.go:172] (0xc0020d4a50) Data frame received for 5 I0428 00:33:39.883591 7 log.go:172] (0xc001f090e0) (5) Data frame handling I0428 00:33:39.883622 7 log.go:172] (0xc0020d4a50) Data frame received for 3 I0428 00:33:39.883634 7 log.go:172] (0xc001f09040) (3) Data frame handling I0428 00:33:39.883648 7 log.go:172] (0xc001f09040) (3) Data frame sent I0428 00:33:39.883660 7 log.go:172] (0xc0020d4a50) Data frame received for 3 I0428 00:33:39.883671 7 log.go:172] (0xc001f09040) (3) Data frame handling I0428 00:33:39.885630 7 log.go:172] (0xc0020d4a50) Data frame received for 1 I0428 00:33:39.885643 7 log.go:172] (0xc0028cd680) (1) Data frame handling I0428 00:33:39.885660 7 log.go:172] (0xc0028cd680) (1) Data frame sent I0428 00:33:39.885676 7 log.go:172] (0xc0020d4a50) (0xc0028cd680) Stream removed, broadcasting: 1 I0428 00:33:39.885785 7 log.go:172] (0xc0020d4a50) (0xc0028cd680) Stream removed, broadcasting: 1 I0428 00:33:39.885809 7 log.go:172] (0xc0020d4a50) (0xc001f09040) Stream removed, broadcasting: 3 I0428 00:33:39.885837 7 log.go:172] (0xc0020d4a50) Go away received I0428 00:33:39.885890 7 log.go:172] (0xc0020d4a50) (0xc001f090e0) Stream removed, broadcasting: 5 Apr 28 00:33:39.885: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:33:39.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7200" for this suite. 
• [SLOW TEST:11.261 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2151,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:33:39.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 00:33:40.484: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 00:33:43.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630820, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630820, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630820, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630820, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:33:45.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630820, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630820, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630820, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630820, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 00:33:48.310: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:33:48.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3072" for this suite. STEP: Destroying namespace "webhook-3072-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.571 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":126,"skipped":2158,"failed":0} SSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 
00:33:48.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:33:48.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1842" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":127,"skipped":2162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:33:48.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role 
binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 00:33:49.556: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 00:33:51.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630829, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630829, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630829, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630829, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 00:33:54.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:33:54.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5586" for this suite. STEP: Destroying namespace "webhook-5586-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.320 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":128,"skipped":2190,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:33:54.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 28 
00:33:59.029: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6629 PodName:pod-sharedvolume-319fcb00-50cf-4393-b7ce-b1de96a2246f ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 00:33:59.029: INFO: >>> kubeConfig: /root/.kube/config I0428 00:33:59.063982 7 log.go:172] (0xc0023dcd10) (0xc001fb80a0) Create stream I0428 00:33:59.064013 7 log.go:172] (0xc0023dcd10) (0xc001fb80a0) Stream added, broadcasting: 1 I0428 00:33:59.066719 7 log.go:172] (0xc0023dcd10) Reply frame received for 1 I0428 00:33:59.066767 7 log.go:172] (0xc0023dcd10) (0xc001fb81e0) Create stream I0428 00:33:59.066782 7 log.go:172] (0xc0023dcd10) (0xc001fb81e0) Stream added, broadcasting: 3 I0428 00:33:59.067813 7 log.go:172] (0xc0023dcd10) Reply frame received for 3 I0428 00:33:59.067858 7 log.go:172] (0xc0023dcd10) (0xc001d8ac80) Create stream I0428 00:33:59.067877 7 log.go:172] (0xc0023dcd10) (0xc001d8ac80) Stream added, broadcasting: 5 I0428 00:33:59.069024 7 log.go:172] (0xc0023dcd10) Reply frame received for 5 I0428 00:33:59.133262 7 log.go:172] (0xc0023dcd10) Data frame received for 5 I0428 00:33:59.133307 7 log.go:172] (0xc001d8ac80) (5) Data frame handling I0428 00:33:59.133360 7 log.go:172] (0xc0023dcd10) Data frame received for 3 I0428 00:33:59.133374 7 log.go:172] (0xc001fb81e0) (3) Data frame handling I0428 00:33:59.133385 7 log.go:172] (0xc001fb81e0) (3) Data frame sent I0428 00:33:59.133392 7 log.go:172] (0xc0023dcd10) Data frame received for 3 I0428 00:33:59.133397 7 log.go:172] (0xc001fb81e0) (3) Data frame handling I0428 00:33:59.135372 7 log.go:172] (0xc0023dcd10) Data frame received for 1 I0428 00:33:59.135401 7 log.go:172] (0xc001fb80a0) (1) Data frame handling I0428 00:33:59.135432 7 log.go:172] (0xc001fb80a0) (1) Data frame sent I0428 00:33:59.135455 7 log.go:172] (0xc0023dcd10) (0xc001fb80a0) Stream removed, broadcasting: 1 I0428 00:33:59.135506 7 
log.go:172] (0xc0023dcd10) Go away received I0428 00:33:59.135547 7 log.go:172] (0xc0023dcd10) (0xc001fb80a0) Stream removed, broadcasting: 1 I0428 00:33:59.135581 7 log.go:172] (0xc0023dcd10) (0xc001fb81e0) Stream removed, broadcasting: 3 I0428 00:33:59.135625 7 log.go:172] (0xc0023dcd10) (0xc001d8ac80) Stream removed, broadcasting: 5 Apr 28 00:33:59.135: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:33:59.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6629" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":129,"skipped":2224,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:33:59.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:34:03.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3378" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2245,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:34:03.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:34:03.346: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 28 00:34:08.359: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 28 00:34:08.359: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 28 00:34:10.363: INFO: Creating deployment "test-rollover-deployment" Apr 28 00:34:10.371: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 28 00:34:12.378: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 28 00:34:12.384: INFO: Ensure that both replica sets have 1 created replica Apr 28 00:34:12.390: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 28 00:34:12.397: INFO: Updating deployment test-rollover-deployment Apr 28 00:34:12.397: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 28 
00:34:14.408: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 28 00:34:14.417: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 28 00:34:14.420: INFO: all replica sets need to contain the pod-template-hash label Apr 28 00:34:14.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630852, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:34:16.461: INFO: all replica sets need to contain the pod-template-hash label Apr 28 00:34:16.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630855, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:34:18.430: INFO: all replica sets need to contain the pod-template-hash label Apr 28 00:34:18.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630855, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:34:20.427: INFO: all replica sets need to contain the pod-template-hash label Apr 28 00:34:20.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630855, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:34:22.427: INFO: all replica sets need to contain the pod-template-hash label Apr 28 00:34:22.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630855, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:34:24.428: INFO: all replica sets need to contain the pod-template-hash label Apr 28 00:34:24.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630855, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723630850, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:34:26.428: INFO: Apr 28 00:34:26.428: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 28 00:34:26.437: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6860 /apis/apps/v1/namespaces/deployment-6860/deployments/test-rollover-deployment 99fec15d-94e9-4754-ba30-d3d8fef24c7f 11587827 2 2020-04-28 00:34:10 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00398da18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-28 00:34:10 +0000 UTC,LastTransitionTime:2020-04-28 00:34:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-28 00:34:26 +0000 UTC,LastTransitionTime:2020-04-28 00:34:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 28 00:34:26.441: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-6860 /apis/apps/v1/namespaces/deployment-6860/replicasets/test-rollover-deployment-78df7bc796 c8052942-651a-477c-a4b9-b29ada759901 11587816 2 2020-04-28 00:34:12 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 99fec15d-94e9-4754-ba30-d3d8fef24c7f 0xc003a98307 0xc003a98308}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil 
nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a98378 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 28 00:34:26.441: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 28 00:34:26.441: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6860 /apis/apps/v1/namespaces/deployment-6860/replicasets/test-rollover-controller 949d3cc5-f284-4f57-a500-a90eb8946f6d 11587825 2 2020-04-28 00:34:03 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 99fec15d-94e9-4754-ba30-d3d8fef24c7f 0xc003a98227 0xc003a98228}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003a98298 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 28 00:34:26.441: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6860 /apis/apps/v1/namespaces/deployment-6860/replicasets/test-rollover-deployment-f6c94f66c 40ddb4d0-d8d0-4425-9ec3-804ccaaebbaf 11587754 2 2020-04-28 00:34:10 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 99fec15d-94e9-4754-ba30-d3d8fef24c7f 0xc003a983e0 0xc003a983e1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a98458 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 28 00:34:26.445: INFO: Pod 
"test-rollover-deployment-78df7bc796-pmjnt" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-pmjnt test-rollover-deployment-78df7bc796- deployment-6860 /api/v1/namespaces/deployment-6860/pods/test-rollover-deployment-78df7bc796-pmjnt bb38b103-b4d3-45be-af84-a14d226a1f1e 11587774 0 2020-04-28 00:34:12 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 c8052942-651a-477c-a4b9-b29ada759901 0xc003a98a27 0xc003a98a28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jc5nz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jc5nz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jc5nz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:34:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:34:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:34:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 00:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.123,StartTime:2020-04-28 
00:34:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 00:34:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://cff9d2103c447febbec6f4c8005feba06613e304d447e588628dcae0e84d9c81,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:34:26.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6860" for this suite. 
• [SLOW TEST:23.203 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":131,"skipped":2250,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:34:26.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:34:26.532: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 28 00:34:26.550: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:26.552: INFO: Number of nodes with available pods: 0 Apr 28 00:34:26.552: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:34:27.557: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:27.560: INFO: Number of nodes with available pods: 0 Apr 28 00:34:27.560: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:34:28.558: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:28.561: INFO: Number of nodes with available pods: 0 Apr 28 00:34:28.561: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:34:29.557: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:29.561: INFO: Number of nodes with available pods: 0 Apr 28 00:34:29.561: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:34:30.557: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:30.560: INFO: Number of nodes with available pods: 1 Apr 28 00:34:30.560: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:34:31.564: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:31.569: INFO: Number of nodes with available pods: 2 Apr 28 00:34:31.569: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 28 00:34:31.780: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:31.780: INFO: Wrong image for pod: daemon-set-n49b6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:31.954: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:32.989: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:32.989: INFO: Wrong image for pod: daemon-set-n49b6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:32.993: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:33.958: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:33.958: INFO: Wrong image for pod: daemon-set-n49b6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:33.958: INFO: Pod daemon-set-n49b6 is not available Apr 28 00:34:33.961: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:34.959: INFO: Wrong image for pod: daemon-set-js62p. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:34.959: INFO: Wrong image for pod: daemon-set-n49b6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:34.959: INFO: Pod daemon-set-n49b6 is not available Apr 28 00:34:34.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:35.959: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:35.959: INFO: Wrong image for pod: daemon-set-n49b6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:35.959: INFO: Pod daemon-set-n49b6 is not available Apr 28 00:34:35.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:36.958: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:36.958: INFO: Wrong image for pod: daemon-set-n49b6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:36.958: INFO: Pod daemon-set-n49b6 is not available Apr 28 00:34:36.961: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:37.959: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 28 00:34:37.959: INFO: Wrong image for pod: daemon-set-n49b6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:37.959: INFO: Pod daemon-set-n49b6 is not available Apr 28 00:34:37.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:38.959: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:38.960: INFO: Wrong image for pod: daemon-set-n49b6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:38.960: INFO: Pod daemon-set-n49b6 is not available Apr 28 00:34:38.966: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:39.958: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:39.958: INFO: Wrong image for pod: daemon-set-n49b6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:39.958: INFO: Pod daemon-set-n49b6 is not available Apr 28 00:34:39.962: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:40.958: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:40.958: INFO: Wrong image for pod: daemon-set-n49b6. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:40.958: INFO: Pod daemon-set-n49b6 is not available Apr 28 00:34:40.962: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:41.957: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:41.957: INFO: Wrong image for pod: daemon-set-n49b6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:41.957: INFO: Pod daemon-set-n49b6 is not available Apr 28 00:34:41.960: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:42.960: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:42.960: INFO: Wrong image for pod: daemon-set-n49b6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:42.960: INFO: Pod daemon-set-n49b6 is not available Apr 28 00:34:42.964: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:43.958: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 28 00:34:43.958: INFO: Pod daemon-set-tv8xk is not available Apr 28 00:34:43.961: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:44.959: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:44.959: INFO: Pod daemon-set-tv8xk is not available Apr 28 00:34:44.962: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:45.976: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:45.989: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:46.959: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:46.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:47.958: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:47.958: INFO: Pod daemon-set-js62p is not available Apr 28 00:34:47.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:48.959: INFO: Wrong image for pod: daemon-set-js62p. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:48.959: INFO: Pod daemon-set-js62p is not available Apr 28 00:34:48.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:49.959: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:49.959: INFO: Pod daemon-set-js62p is not available Apr 28 00:34:49.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:50.959: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 28 00:34:50.959: INFO: Pod daemon-set-js62p is not available Apr 28 00:34:50.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:51.959: INFO: Wrong image for pod: daemon-set-js62p. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 28 00:34:51.959: INFO: Pod daemon-set-js62p is not available Apr 28 00:34:51.962: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:52.959: INFO: Pod daemon-set-7pvg2 is not available Apr 28 00:34:52.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 28 00:34:52.967: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:52.969: INFO: Number of nodes with available pods: 1 Apr 28 00:34:52.969: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:34:53.975: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:53.979: INFO: Number of nodes with available pods: 1 Apr 28 00:34:53.979: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:34:54.974: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:54.991: INFO: Number of nodes with available pods: 1 Apr 28 00:34:54.992: INFO: Node latest-worker is running more than one daemon pod Apr 28 00:34:55.975: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 00:34:55.978: INFO: Number of nodes with available pods: 2 Apr 28 00:34:55.978: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon 
set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-619, will wait for the garbage collector to delete the pods Apr 28 00:34:56.066: INFO: Deleting DaemonSet.extensions daemon-set took: 21.68904ms Apr 28 00:34:56.366: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.211508ms Apr 28 00:35:02.779: INFO: Number of nodes with available pods: 0 Apr 28 00:35:02.779: INFO: Number of running nodes: 0, number of available pods: 0 Apr 28 00:35:02.782: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-619/daemonsets","resourceVersion":"11588048"},"items":null} Apr 28 00:35:02.784: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-619/pods","resourceVersion":"11588048"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:35:02.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-619" for this suite. 
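The pod-by-pod image replacement polled above (old `httpd:2.4.38-alpine` pods reported as "Wrong image" until the new `agnhost:2.12` pods become available) is the behaviour of a DaemonSet with a RollingUpdate strategy. A minimal sketch of such a manifest follows; the metadata names and labels are illustrative assumptions, not the test's actual fixture:

```yaml
# Sketch of a DaemonSet whose update behaviour matches the log above:
# editing spec.template.spec.containers[0].image triggers a rolling,
# one-pod-at-a-time replacement on every schedulable node.
# Names and labels here are assumptions for illustration.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace one pod at a time, as the polling shows
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        # Changing this field (e.g. to the agnhost image) starts the rollout
        # that the test then watches until every node runs the new image.
        image: docker.io/library/httpd:2.4.38-alpine
```

Re-applying the manifest with a new `image` value reproduces the "Wrong image for pod" polling seen in the log while the old pods drain. Note that the control-plane node is skipped because the DaemonSet does not tolerate its `node-role.kubernetes.io/master:NoSchedule` taint.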
• [SLOW TEST:36.374 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":132,"skipped":2254,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:35:02.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7488 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 28 00:35:02.893: INFO: Found 0 stateful pods, waiting for 3 Apr 28 00:35:12.897: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - 
Ready=true Apr 28 00:35:12.897: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:35:12.897: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 28 00:35:22.898: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:35:22.898: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:35:22.898: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:35:22.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7488 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 28 00:35:23.169: INFO: stderr: "I0428 00:35:23.052301 2117 log.go:172] (0xc000904a50) (0xc0006f0140) Create stream\nI0428 00:35:23.052364 2117 log.go:172] (0xc000904a50) (0xc0006f0140) Stream added, broadcasting: 1\nI0428 00:35:23.055387 2117 log.go:172] (0xc000904a50) Reply frame received for 1\nI0428 00:35:23.055444 2117 log.go:172] (0xc000904a50) (0xc0006f01e0) Create stream\nI0428 00:35:23.055458 2117 log.go:172] (0xc000904a50) (0xc0006f01e0) Stream added, broadcasting: 3\nI0428 00:35:23.056389 2117 log.go:172] (0xc000904a50) Reply frame received for 3\nI0428 00:35:23.056434 2117 log.go:172] (0xc000904a50) (0xc0006f0280) Create stream\nI0428 00:35:23.056451 2117 log.go:172] (0xc000904a50) (0xc0006f0280) Stream added, broadcasting: 5\nI0428 00:35:23.057726 2117 log.go:172] (0xc000904a50) Reply frame received for 5\nI0428 00:35:23.132176 2117 log.go:172] (0xc000904a50) Data frame received for 5\nI0428 00:35:23.132204 2117 log.go:172] (0xc0006f0280) (5) Data frame handling\nI0428 00:35:23.132222 2117 log.go:172] (0xc0006f0280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0428 00:35:23.161843 2117 log.go:172] (0xc000904a50) Data frame 
received for 5\nI0428 00:35:23.161903 2117 log.go:172] (0xc000904a50) Data frame received for 3\nI0428 00:35:23.161937 2117 log.go:172] (0xc0006f01e0) (3) Data frame handling\nI0428 00:35:23.161964 2117 log.go:172] (0xc0006f0280) (5) Data frame handling\nI0428 00:35:23.162001 2117 log.go:172] (0xc0006f01e0) (3) Data frame sent\nI0428 00:35:23.162037 2117 log.go:172] (0xc000904a50) Data frame received for 3\nI0428 00:35:23.162071 2117 log.go:172] (0xc0006f01e0) (3) Data frame handling\nI0428 00:35:23.163706 2117 log.go:172] (0xc000904a50) Data frame received for 1\nI0428 00:35:23.163729 2117 log.go:172] (0xc0006f0140) (1) Data frame handling\nI0428 00:35:23.163748 2117 log.go:172] (0xc0006f0140) (1) Data frame sent\nI0428 00:35:23.163817 2117 log.go:172] (0xc000904a50) (0xc0006f0140) Stream removed, broadcasting: 1\nI0428 00:35:23.164081 2117 log.go:172] (0xc000904a50) Go away received\nI0428 00:35:23.164107 2117 log.go:172] (0xc000904a50) (0xc0006f0140) Stream removed, broadcasting: 1\nI0428 00:35:23.164124 2117 log.go:172] (0xc000904a50) (0xc0006f01e0) Stream removed, broadcasting: 3\nI0428 00:35:23.164137 2117 log.go:172] (0xc000904a50) (0xc0006f0280) Stream removed, broadcasting: 5\n" Apr 28 00:35:23.169: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 28 00:35:23.169: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 28 00:35:33.205: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 28 00:35:43.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7488 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 28 00:35:43.495: 
INFO: stderr: "I0428 00:35:43.406747 2138 log.go:172] (0xc000b7e0b0) (0xc0006c7e00) Create stream\nI0428 00:35:43.406806 2138 log.go:172] (0xc000b7e0b0) (0xc0006c7e00) Stream added, broadcasting: 1\nI0428 00:35:43.409094 2138 log.go:172] (0xc000b7e0b0) Reply frame received for 1\nI0428 00:35:43.409226 2138 log.go:172] (0xc000b7e0b0) (0xc00068c0a0) Create stream\nI0428 00:35:43.409237 2138 log.go:172] (0xc000b7e0b0) (0xc00068c0a0) Stream added, broadcasting: 3\nI0428 00:35:43.410545 2138 log.go:172] (0xc000b7e0b0) Reply frame received for 3\nI0428 00:35:43.410592 2138 log.go:172] (0xc000b7e0b0) (0xc0006c7ea0) Create stream\nI0428 00:35:43.410603 2138 log.go:172] (0xc000b7e0b0) (0xc0006c7ea0) Stream added, broadcasting: 5\nI0428 00:35:43.411808 2138 log.go:172] (0xc000b7e0b0) Reply frame received for 5\nI0428 00:35:43.487645 2138 log.go:172] (0xc000b7e0b0) Data frame received for 5\nI0428 00:35:43.487683 2138 log.go:172] (0xc0006c7ea0) (5) Data frame handling\nI0428 00:35:43.487698 2138 log.go:172] (0xc0006c7ea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0428 00:35:43.487724 2138 log.go:172] (0xc000b7e0b0) Data frame received for 5\nI0428 00:35:43.487740 2138 log.go:172] (0xc0006c7ea0) (5) Data frame handling\nI0428 00:35:43.487765 2138 log.go:172] (0xc000b7e0b0) Data frame received for 3\nI0428 00:35:43.487776 2138 log.go:172] (0xc00068c0a0) (3) Data frame handling\nI0428 00:35:43.487786 2138 log.go:172] (0xc00068c0a0) (3) Data frame sent\nI0428 00:35:43.487860 2138 log.go:172] (0xc000b7e0b0) Data frame received for 3\nI0428 00:35:43.487890 2138 log.go:172] (0xc00068c0a0) (3) Data frame handling\nI0428 00:35:43.489872 2138 log.go:172] (0xc000b7e0b0) Data frame received for 1\nI0428 00:35:43.489899 2138 log.go:172] (0xc0006c7e00) (1) Data frame handling\nI0428 00:35:43.489930 2138 log.go:172] (0xc0006c7e00) (1) Data frame sent\nI0428 00:35:43.489953 2138 log.go:172] (0xc000b7e0b0) (0xc0006c7e00) Stream removed, broadcasting: 1\nI0428 
00:35:43.489995 2138 log.go:172] (0xc000b7e0b0) Go away received\nI0428 00:35:43.490403 2138 log.go:172] (0xc000b7e0b0) (0xc0006c7e00) Stream removed, broadcasting: 1\nI0428 00:35:43.490428 2138 log.go:172] (0xc000b7e0b0) (0xc00068c0a0) Stream removed, broadcasting: 3\nI0428 00:35:43.490441 2138 log.go:172] (0xc000b7e0b0) (0xc0006c7ea0) Stream removed, broadcasting: 5\n" Apr 28 00:35:43.495: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 28 00:35:43.495: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 28 00:35:53.529: INFO: Waiting for StatefulSet statefulset-7488/ss2 to complete update Apr 28 00:35:53.529: INFO: Waiting for Pod statefulset-7488/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 28 00:35:53.529: INFO: Waiting for Pod statefulset-7488/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 28 00:36:03.542: INFO: Waiting for StatefulSet statefulset-7488/ss2 to complete update Apr 28 00:36:03.542: INFO: Waiting for Pod statefulset-7488/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 28 00:36:13.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7488 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 28 00:36:13.794: INFO: stderr: "I0428 00:36:13.671376 2161 log.go:172] (0xc000a31600) (0xc000a288c0) Create stream\nI0428 00:36:13.671437 2161 log.go:172] (0xc000a31600) (0xc000a288c0) Stream added, broadcasting: 1\nI0428 00:36:13.677024 2161 log.go:172] (0xc000a31600) Reply frame received for 1\nI0428 00:36:13.677072 2161 log.go:172] (0xc000a31600) (0xc0007ef680) Create stream\nI0428 00:36:13.677103 2161 log.go:172] (0xc000a31600) (0xc0007ef680) Stream added, broadcasting: 3\nI0428 00:36:13.678385 
2161 log.go:172] (0xc000a31600) Reply frame received for 3\nI0428 00:36:13.678441 2161 log.go:172] (0xc000a31600) (0xc00060eaa0) Create stream\nI0428 00:36:13.678456 2161 log.go:172] (0xc000a31600) (0xc00060eaa0) Stream added, broadcasting: 5\nI0428 00:36:13.679428 2161 log.go:172] (0xc000a31600) Reply frame received for 5\nI0428 00:36:13.757026 2161 log.go:172] (0xc000a31600) Data frame received for 5\nI0428 00:36:13.757056 2161 log.go:172] (0xc00060eaa0) (5) Data frame handling\nI0428 00:36:13.757075 2161 log.go:172] (0xc00060eaa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0428 00:36:13.786592 2161 log.go:172] (0xc000a31600) Data frame received for 3\nI0428 00:36:13.786647 2161 log.go:172] (0xc0007ef680) (3) Data frame handling\nI0428 00:36:13.786683 2161 log.go:172] (0xc0007ef680) (3) Data frame sent\nI0428 00:36:13.786705 2161 log.go:172] (0xc000a31600) Data frame received for 3\nI0428 00:36:13.786722 2161 log.go:172] (0xc0007ef680) (3) Data frame handling\nI0428 00:36:13.786745 2161 log.go:172] (0xc000a31600) Data frame received for 5\nI0428 00:36:13.786775 2161 log.go:172] (0xc00060eaa0) (5) Data frame handling\nI0428 00:36:13.788694 2161 log.go:172] (0xc000a31600) Data frame received for 1\nI0428 00:36:13.788725 2161 log.go:172] (0xc000a288c0) (1) Data frame handling\nI0428 00:36:13.788751 2161 log.go:172] (0xc000a288c0) (1) Data frame sent\nI0428 00:36:13.788775 2161 log.go:172] (0xc000a31600) (0xc000a288c0) Stream removed, broadcasting: 1\nI0428 00:36:13.788808 2161 log.go:172] (0xc000a31600) Go away received\nI0428 00:36:13.789302 2161 log.go:172] (0xc000a31600) (0xc000a288c0) Stream removed, broadcasting: 1\nI0428 00:36:13.789330 2161 log.go:172] (0xc000a31600) (0xc0007ef680) Stream removed, broadcasting: 3\nI0428 00:36:13.789342 2161 log.go:172] (0xc000a31600) (0xc00060eaa0) Stream removed, broadcasting: 5\n" Apr 28 00:36:13.795: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 28 
00:36:13.795: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 28 00:36:23.834: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 28 00:36:33.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7488 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 28 00:36:34.124: INFO: stderr: "I0428 00:36:34.020183 2183 log.go:172] (0xc000916630) (0xc000440c80) Create stream\nI0428 00:36:34.020253 2183 log.go:172] (0xc000916630) (0xc000440c80) Stream added, broadcasting: 1\nI0428 00:36:34.022829 2183 log.go:172] (0xc000916630) Reply frame received for 1\nI0428 00:36:34.022892 2183 log.go:172] (0xc000916630) (0xc0006a7400) Create stream\nI0428 00:36:34.022912 2183 log.go:172] (0xc000916630) (0xc0006a7400) Stream added, broadcasting: 3\nI0428 00:36:34.023831 2183 log.go:172] (0xc000916630) Reply frame received for 3\nI0428 00:36:34.023864 2183 log.go:172] (0xc000916630) (0xc000b1c000) Create stream\nI0428 00:36:34.023876 2183 log.go:172] (0xc000916630) (0xc000b1c000) Stream added, broadcasting: 5\nI0428 00:36:34.024761 2183 log.go:172] (0xc000916630) Reply frame received for 5\nI0428 00:36:34.117961 2183 log.go:172] (0xc000916630) Data frame received for 5\nI0428 00:36:34.117994 2183 log.go:172] (0xc000b1c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0428 00:36:34.118029 2183 log.go:172] (0xc000916630) Data frame received for 3\nI0428 00:36:34.118077 2183 log.go:172] (0xc0006a7400) (3) Data frame handling\nI0428 00:36:34.118096 2183 log.go:172] (0xc0006a7400) (3) Data frame sent\nI0428 00:36:34.118107 2183 log.go:172] (0xc000916630) Data frame received for 3\nI0428 00:36:34.118122 2183 log.go:172] (0xc0006a7400) (3) Data frame handling\nI0428 00:36:34.118168 2183 log.go:172] (0xc000b1c000) 
(5) Data frame sent\nI0428 00:36:34.118205 2183 log.go:172] (0xc000916630) Data frame received for 5\nI0428 00:36:34.118221 2183 log.go:172] (0xc000b1c000) (5) Data frame handling\nI0428 00:36:34.119963 2183 log.go:172] (0xc000916630) Data frame received for 1\nI0428 00:36:34.119998 2183 log.go:172] (0xc000440c80) (1) Data frame handling\nI0428 00:36:34.120016 2183 log.go:172] (0xc000440c80) (1) Data frame sent\nI0428 00:36:34.120044 2183 log.go:172] (0xc000916630) (0xc000440c80) Stream removed, broadcasting: 1\nI0428 00:36:34.120084 2183 log.go:172] (0xc000916630) Go away received\nI0428 00:36:34.120494 2183 log.go:172] (0xc000916630) (0xc000440c80) Stream removed, broadcasting: 1\nI0428 00:36:34.120525 2183 log.go:172] (0xc000916630) (0xc0006a7400) Stream removed, broadcasting: 3\nI0428 00:36:34.120538 2183 log.go:172] (0xc000916630) (0xc000b1c000) Stream removed, broadcasting: 5\n" Apr 28 00:36:34.124: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 28 00:36:34.124: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 28 00:36:54.150: INFO: Deleting all statefulset in ns statefulset-7488 Apr 28 00:36:54.153: INFO: Scaling statefulset ss2 to 0 Apr 28 00:37:14.184: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 00:37:14.187: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:37:14.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7488" for this suite. 
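The StatefulSet test above updates the pod template image from `httpd:2.4.38-alpine` to `httpd:2.4.39-alpine`, waits for the controller to roll pods in reverse ordinal order (`ss2-2` first, `ss2-0` last), and then rolls back to the previous controller revision. A minimal sketch of the object being exercised, with assumed labels and container name:

```yaml
# Sketch of a StatefulSet matching the rollout behaviour in the log.
# The headless service name "test" is taken from the log
# ("Creating service test in namespace statefulset-7488");
# labels and the container name are illustrative assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate   # pods are recreated in reverse ordinal order
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        # Updated to httpd:2.4.39-alpine by the test, then rolled back;
        # each template change produces a new ControllerRevision
        # (the ss2-84f9d6bf57 / ss2-65c7964b94 hashes in the log).
        image: docker.io/library/httpd:2.4.38-alpine
```

A rollback like the test's can be performed either by re-applying the previous template or, on recent clusters, with `kubectl rollout undo statefulset/ss2`; either way the controller converges pods onto the older revision hash, which is what the "Waiting for Pod ... to have revision" lines are polling for.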
• [SLOW TEST:131.381 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":133,"skipped":2273,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:37:14.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:37:14.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9128" for 
this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":134,"skipped":2286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:37:14.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-a655109c-8916-4290-afad-edb022a1f65b STEP: Creating a pod to test consume configMaps Apr 28 00:37:14.451: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2b489195-7c06-408a-bcea-4a5218f548b3" in namespace "projected-5562" to be "Succeeded or Failed" Apr 28 00:37:14.522: INFO: Pod "pod-projected-configmaps-2b489195-7c06-408a-bcea-4a5218f548b3": Phase="Pending", Reason="", readiness=false. Elapsed: 71.430232ms Apr 28 00:37:16.527: INFO: Pod "pod-projected-configmaps-2b489195-7c06-408a-bcea-4a5218f548b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075629558s Apr 28 00:37:18.531: INFO: Pod "pod-projected-configmaps-2b489195-7c06-408a-bcea-4a5218f548b3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.079831434s STEP: Saw pod success Apr 28 00:37:18.531: INFO: Pod "pod-projected-configmaps-2b489195-7c06-408a-bcea-4a5218f548b3" satisfied condition "Succeeded or Failed" Apr 28 00:37:18.534: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2b489195-7c06-408a-bcea-4a5218f548b3 container projected-configmap-volume-test: STEP: delete the pod Apr 28 00:37:18.561: INFO: Waiting for pod pod-projected-configmaps-2b489195-7c06-408a-bcea-4a5218f548b3 to disappear Apr 28 00:37:18.566: INFO: Pod pod-projected-configmaps-2b489195-7c06-408a-bcea-4a5218f548b3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:37:18.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5562" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2318,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:37:18.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-a83a484c-7991-4e55-914a-bc511bfbff99 STEP: Creating the pod STEP: 
Updating configmap configmap-test-upd-a83a484c-7991-4e55-914a-bc511bfbff99 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:37:26.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8267" for this suite. • [SLOW TEST:8.187 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2336,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:37:26.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 28 00:37:26.887: INFO: Waiting up to 5m0s for pod "pod-ff87f9d5-6d85-42bd-81b7-614ed4e3fab4" in namespace "emptydir-4979" to be "Succeeded or Failed" Apr 28 00:37:26.936: INFO: Pod "pod-ff87f9d5-6d85-42bd-81b7-614ed4e3fab4": 
Phase="Pending", Reason="", readiness=false. Elapsed: 48.390677ms Apr 28 00:37:28.940: INFO: Pod "pod-ff87f9d5-6d85-42bd-81b7-614ed4e3fab4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052045543s Apr 28 00:37:30.944: INFO: Pod "pod-ff87f9d5-6d85-42bd-81b7-614ed4e3fab4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056498542s STEP: Saw pod success Apr 28 00:37:30.944: INFO: Pod "pod-ff87f9d5-6d85-42bd-81b7-614ed4e3fab4" satisfied condition "Succeeded or Failed" Apr 28 00:37:30.948: INFO: Trying to get logs from node latest-worker2 pod pod-ff87f9d5-6d85-42bd-81b7-614ed4e3fab4 container test-container: STEP: delete the pod Apr 28 00:37:30.991: INFO: Waiting for pod pod-ff87f9d5-6d85-42bd-81b7-614ed4e3fab4 to disappear Apr 28 00:37:30.998: INFO: Pod pod-ff87f9d5-6d85-42bd-81b7-614ed4e3fab4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:37:30.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4979" for this suite. 
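The EmptyDir test above launches a short-lived pod that writes a file with mode 0666 into an `emptyDir` volume on the default (node-disk) medium and verifies the permissions before the pod succeeds. A minimal hand-written sketch of the same check, using an assumed busybox image rather than the suite's internal test image:

```yaml
# Sketch of the (root,0666,default) emptyDir check: create a file in an
# emptyDir volume, set mode 0666, and print the resulting permissions.
# The busybox image and pod name are assumptions; the e2e suite uses its
# own test image and generated names.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.31
    command:
    - sh
    - -c
    - "touch /mnt/f && chmod 0666 /mnt/f && stat -c '%a' /mnt/f"
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}   # "default" medium: backed by the node's filesystem
```

Like the test's pod, this runs to `Succeeded` and the verification is read from the container log, which is why the log shows the framework fetching logs from the node after "Saw pod success".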
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2337,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:37:31.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7893.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7893.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 28 00:37:37.152: INFO: DNS probes using dns-7893/dns-test-5cb9b6b9-182e-4865-99f2-4ae82641ce0a succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:37:37.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7893" for this suite.
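The wheezy/jessie probe scripts above derive the pod's A-record name by rewriting the dots in its IP to dashes and appending `<namespace>.pod.cluster.local` (the `hostname -i | awk -F.` step). Only that name construction is sketched below — the function name and example IP are hypothetical, and the actual `dig` lookups are omitted since they need a running cluster:

```shell
# Build the cluster-DNS pod A-record for a pod IP, mirroring the awk
# step in the probe script: e.g. 10.244.1.5 in namespace dns-7893
# becomes 10-244-1-5.dns-7893.pod.cluster.local.
# (Hypothetical helper; the IP here is an example, not from the log.)
pod_a_record() {
  local ip="$1" ns="$2"
  echo "${ip//./-}.${ns}.pod.cluster.local"   # bash pattern substitution: all "." -> "-"
}

pod_a_record "10.244.1.5" "dns-7893"
```

The probe then resolves each name over both UDP (`dig +notcp`) and TCP (`dig +tcp`), writing an `OK` marker file per successful lookup, which is what the "looking for the results for each expected name from probers" step collects.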
• [SLOW TEST:6.274 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":138,"skipped":2344,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:37:37.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 28 00:37:37.440: INFO: Waiting up to 5m0s for pod "downward-api-a71cb6a5-b6de-4d86-bc2e-fc37f5f84e7a" in namespace "downward-api-8207" to be "Succeeded or Failed"
Apr 28 00:37:37.644: INFO: Pod "downward-api-a71cb6a5-b6de-4d86-bc2e-fc37f5f84e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 204.169476ms
Apr 28 00:37:39.648: INFO: Pod "downward-api-a71cb6a5-b6de-4d86-bc2e-fc37f5f84e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208245397s
Apr 28 00:37:41.653: INFO: Pod "downward-api-a71cb6a5-b6de-4d86-bc2e-fc37f5f84e7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.212622542s
STEP: Saw pod success
Apr 28 00:37:41.653: INFO: Pod "downward-api-a71cb6a5-b6de-4d86-bc2e-fc37f5f84e7a" satisfied condition "Succeeded or Failed"
Apr 28 00:37:41.656: INFO: Trying to get logs from node latest-worker2 pod downward-api-a71cb6a5-b6de-4d86-bc2e-fc37f5f84e7a container dapi-container:
STEP: delete the pod
Apr 28 00:37:41.683: INFO: Waiting for pod downward-api-a71cb6a5-b6de-4d86-bc2e-fc37f5f84e7a to disappear
Apr 28 00:37:41.742: INFO: Pod downward-api-a71cb6a5-b6de-4d86-bc2e-fc37f5f84e7a no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:37:41.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8207" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2383,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:37:41.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4352 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4352 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4352 Apr 28 00:37:41.832: INFO: Found 0 stateful pods, waiting for 1 Apr 28 00:37:51.836: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 28 00:37:51.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4352 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 28 00:37:54.971: INFO: stderr: "I0428 00:37:54.873666 2206 log.go:172] (0xc000b36210) (0xc0002f2500) Create stream\nI0428 00:37:54.873702 2206 log.go:172] (0xc000b36210) (0xc0002f2500) Stream added, broadcasting: 1\nI0428 00:37:54.876419 2206 log.go:172] (0xc000b36210) Reply frame received for 1\nI0428 00:37:54.876465 2206 log.go:172] (0xc000b36210) (0xc0002f25a0) Create stream\nI0428 00:37:54.876481 2206 log.go:172] (0xc000b36210) (0xc0002f25a0) Stream added, broadcasting: 3\nI0428 00:37:54.877509 2206 log.go:172] (0xc000b36210) Reply frame received for 3\nI0428 00:37:54.877553 2206 log.go:172] (0xc000b36210) (0xc00075c000) Create stream\nI0428 00:37:54.877567 2206 log.go:172] (0xc000b36210) (0xc00075c000) Stream added, broadcasting: 5\nI0428 00:37:54.878539 2206 log.go:172] (0xc000b36210) Reply frame received for 5\nI0428 00:37:54.925674 2206 log.go:172] (0xc000b36210) Data frame received for 5\nI0428 
00:37:54.925702 2206 log.go:172] (0xc00075c000) (5) Data frame handling\nI0428 00:37:54.925723 2206 log.go:172] (0xc00075c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0428 00:37:54.964286 2206 log.go:172] (0xc000b36210) Data frame received for 3\nI0428 00:37:54.964333 2206 log.go:172] (0xc0002f25a0) (3) Data frame handling\nI0428 00:37:54.964351 2206 log.go:172] (0xc0002f25a0) (3) Data frame sent\nI0428 00:37:54.964364 2206 log.go:172] (0xc000b36210) Data frame received for 3\nI0428 00:37:54.964374 2206 log.go:172] (0xc0002f25a0) (3) Data frame handling\nI0428 00:37:54.964393 2206 log.go:172] (0xc000b36210) Data frame received for 5\nI0428 00:37:54.964406 2206 log.go:172] (0xc00075c000) (5) Data frame handling\nI0428 00:37:54.966259 2206 log.go:172] (0xc000b36210) Data frame received for 1\nI0428 00:37:54.966272 2206 log.go:172] (0xc0002f2500) (1) Data frame handling\nI0428 00:37:54.966282 2206 log.go:172] (0xc0002f2500) (1) Data frame sent\nI0428 00:37:54.966292 2206 log.go:172] (0xc000b36210) (0xc0002f2500) Stream removed, broadcasting: 1\nI0428 00:37:54.966514 2206 log.go:172] (0xc000b36210) (0xc0002f2500) Stream removed, broadcasting: 1\nI0428 00:37:54.966529 2206 log.go:172] (0xc000b36210) (0xc0002f25a0) Stream removed, broadcasting: 3\nI0428 00:37:54.966538 2206 log.go:172] (0xc000b36210) (0xc00075c000) Stream removed, broadcasting: 5\n" Apr 28 00:37:54.971: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 28 00:37:54.971: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 28 00:37:54.975: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 28 00:38:04.979: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 00:38:04.979: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 00:38:05.008: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 9.999999609s Apr 28 00:38:06.012: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.98013863s Apr 28 00:38:07.017: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.975587191s Apr 28 00:38:08.022: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971093091s Apr 28 00:38:09.031: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.966299947s Apr 28 00:38:10.035: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.956839577s Apr 28 00:38:11.041: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.952474879s Apr 28 00:38:12.045: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.946718264s Apr 28 00:38:13.066: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.942980721s Apr 28 00:38:14.070: INFO: Verifying statefulset ss doesn't scale past 1 for another 921.820191ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4352 Apr 28 00:38:15.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4352 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 28 00:38:15.293: INFO: stderr: "I0428 00:38:15.221072 2236 log.go:172] (0xc000a47600) (0xc000a1e640) Create stream\nI0428 00:38:15.221252 2236 log.go:172] (0xc000a47600) (0xc000a1e640) Stream added, broadcasting: 1\nI0428 00:38:15.224269 2236 log.go:172] (0xc000a47600) Reply frame received for 1\nI0428 00:38:15.224309 2236 log.go:172] (0xc000a47600) (0xc000940500) Create stream\nI0428 00:38:15.224322 2236 log.go:172] (0xc000a47600) (0xc000940500) Stream added, broadcasting: 3\nI0428 00:38:15.225719 2236 log.go:172] (0xc000a47600) Reply frame received for 3\nI0428 00:38:15.225754 2236 log.go:172] (0xc000a47600) (0xc0009da320) Create stream\nI0428 00:38:15.225765 2236 
log.go:172] (0xc000a47600) (0xc0009da320) Stream added, broadcasting: 5\nI0428 00:38:15.226700 2236 log.go:172] (0xc000a47600) Reply frame received for 5\nI0428 00:38:15.288090 2236 log.go:172] (0xc000a47600) Data frame received for 5\nI0428 00:38:15.288128 2236 log.go:172] (0xc0009da320) (5) Data frame handling\nI0428 00:38:15.288137 2236 log.go:172] (0xc0009da320) (5) Data frame sent\nI0428 00:38:15.288142 2236 log.go:172] (0xc000a47600) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0428 00:38:15.288170 2236 log.go:172] (0xc000a47600) Data frame received for 3\nI0428 00:38:15.288227 2236 log.go:172] (0xc000940500) (3) Data frame handling\nI0428 00:38:15.288257 2236 log.go:172] (0xc000940500) (3) Data frame sent\nI0428 00:38:15.288280 2236 log.go:172] (0xc000a47600) Data frame received for 3\nI0428 00:38:15.288296 2236 log.go:172] (0xc000940500) (3) Data frame handling\nI0428 00:38:15.288313 2236 log.go:172] (0xc0009da320) (5) Data frame handling\nI0428 00:38:15.290067 2236 log.go:172] (0xc000a47600) Data frame received for 1\nI0428 00:38:15.290081 2236 log.go:172] (0xc000a1e640) (1) Data frame handling\nI0428 00:38:15.290088 2236 log.go:172] (0xc000a1e640) (1) Data frame sent\nI0428 00:38:15.290096 2236 log.go:172] (0xc000a47600) (0xc000a1e640) Stream removed, broadcasting: 1\nI0428 00:38:15.290109 2236 log.go:172] (0xc000a47600) Go away received\nI0428 00:38:15.290422 2236 log.go:172] (0xc000a47600) (0xc000a1e640) Stream removed, broadcasting: 1\nI0428 00:38:15.290436 2236 log.go:172] (0xc000a47600) (0xc000940500) Stream removed, broadcasting: 3\nI0428 00:38:15.290441 2236 log.go:172] (0xc000a47600) (0xc0009da320) Stream removed, broadcasting: 5\n" Apr 28 00:38:15.294: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 28 00:38:15.294: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 28 00:38:15.297: 
INFO: Found 1 stateful pods, waiting for 3 Apr 28 00:38:25.302: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:38:25.302: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:38:25.302: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 28 00:38:25.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4352 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 28 00:38:25.515: INFO: stderr: "I0428 00:38:25.457898 2255 log.go:172] (0xc0008aa790) (0xc00078e280) Create stream\nI0428 00:38:25.457973 2255 log.go:172] (0xc0008aa790) (0xc00078e280) Stream added, broadcasting: 1\nI0428 00:38:25.460450 2255 log.go:172] (0xc0008aa790) Reply frame received for 1\nI0428 00:38:25.460491 2255 log.go:172] (0xc0008aa790) (0xc000551400) Create stream\nI0428 00:38:25.460504 2255 log.go:172] (0xc0008aa790) (0xc000551400) Stream added, broadcasting: 3\nI0428 00:38:25.461615 2255 log.go:172] (0xc0008aa790) Reply frame received for 3\nI0428 00:38:25.461638 2255 log.go:172] (0xc0008aa790) (0xc00078e320) Create stream\nI0428 00:38:25.461644 2255 log.go:172] (0xc0008aa790) (0xc00078e320) Stream added, broadcasting: 5\nI0428 00:38:25.462337 2255 log.go:172] (0xc0008aa790) Reply frame received for 5\nI0428 00:38:25.508851 2255 log.go:172] (0xc0008aa790) Data frame received for 5\nI0428 00:38:25.508901 2255 log.go:172] (0xc00078e320) (5) Data frame handling\nI0428 00:38:25.508922 2255 log.go:172] (0xc00078e320) (5) Data frame sent\nI0428 00:38:25.508939 2255 log.go:172] (0xc0008aa790) Data frame received for 5\nI0428 00:38:25.508949 2255 log.go:172] (0xc00078e320) (5) Data frame handling\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0428 00:38:25.508963 2255 log.go:172] (0xc0008aa790) Data frame received for 3\nI0428 00:38:25.509048 2255 log.go:172] (0xc000551400) (3) Data frame handling\nI0428 00:38:25.509074 2255 log.go:172] (0xc000551400) (3) Data frame sent\nI0428 00:38:25.509087 2255 log.go:172] (0xc0008aa790) Data frame received for 3\nI0428 00:38:25.509105 2255 log.go:172] (0xc000551400) (3) Data frame handling\nI0428 00:38:25.510393 2255 log.go:172] (0xc0008aa790) Data frame received for 1\nI0428 00:38:25.510426 2255 log.go:172] (0xc00078e280) (1) Data frame handling\nI0428 00:38:25.510444 2255 log.go:172] (0xc00078e280) (1) Data frame sent\nI0428 00:38:25.510456 2255 log.go:172] (0xc0008aa790) (0xc00078e280) Stream removed, broadcasting: 1\nI0428 00:38:25.510467 2255 log.go:172] (0xc0008aa790) Go away received\nI0428 00:38:25.510825 2255 log.go:172] (0xc0008aa790) (0xc00078e280) Stream removed, broadcasting: 1\nI0428 00:38:25.510839 2255 log.go:172] (0xc0008aa790) (0xc000551400) Stream removed, broadcasting: 3\nI0428 00:38:25.510847 2255 log.go:172] (0xc0008aa790) (0xc00078e320) Stream removed, broadcasting: 5\n" Apr 28 00:38:25.515: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 28 00:38:25.515: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 28 00:38:25.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4352 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 28 00:38:25.761: INFO: stderr: "I0428 00:38:25.656742 2277 log.go:172] (0xc0003e2b00) (0xc000677360) Create stream\nI0428 00:38:25.656805 2277 log.go:172] (0xc0003e2b00) (0xc000677360) Stream added, broadcasting: 1\nI0428 00:38:25.659297 2277 log.go:172] (0xc0003e2b00) Reply frame received for 1\nI0428 00:38:25.659333 2277 log.go:172] 
(0xc0003e2b00) (0xc000956000) Create stream\nI0428 00:38:25.659344 2277 log.go:172] (0xc0003e2b00) (0xc000956000) Stream added, broadcasting: 3\nI0428 00:38:25.660293 2277 log.go:172] (0xc0003e2b00) Reply frame received for 3\nI0428 00:38:25.660361 2277 log.go:172] (0xc0003e2b00) (0xc00077c0a0) Create stream\nI0428 00:38:25.660385 2277 log.go:172] (0xc0003e2b00) (0xc00077c0a0) Stream added, broadcasting: 5\nI0428 00:38:25.661678 2277 log.go:172] (0xc0003e2b00) Reply frame received for 5\nI0428 00:38:25.727140 2277 log.go:172] (0xc0003e2b00) Data frame received for 5\nI0428 00:38:25.727167 2277 log.go:172] (0xc00077c0a0) (5) Data frame handling\nI0428 00:38:25.727183 2277 log.go:172] (0xc00077c0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0428 00:38:25.753846 2277 log.go:172] (0xc0003e2b00) Data frame received for 3\nI0428 00:38:25.753885 2277 log.go:172] (0xc000956000) (3) Data frame handling\nI0428 00:38:25.753905 2277 log.go:172] (0xc000956000) (3) Data frame sent\nI0428 00:38:25.753915 2277 log.go:172] (0xc0003e2b00) Data frame received for 3\nI0428 00:38:25.753922 2277 log.go:172] (0xc000956000) (3) Data frame handling\nI0428 00:38:25.754138 2277 log.go:172] (0xc0003e2b00) Data frame received for 5\nI0428 00:38:25.754154 2277 log.go:172] (0xc00077c0a0) (5) Data frame handling\nI0428 00:38:25.755847 2277 log.go:172] (0xc0003e2b00) Data frame received for 1\nI0428 00:38:25.755867 2277 log.go:172] (0xc000677360) (1) Data frame handling\nI0428 00:38:25.755877 2277 log.go:172] (0xc000677360) (1) Data frame sent\nI0428 00:38:25.755887 2277 log.go:172] (0xc0003e2b00) (0xc000677360) Stream removed, broadcasting: 1\nI0428 00:38:25.755910 2277 log.go:172] (0xc0003e2b00) Go away received\nI0428 00:38:25.756169 2277 log.go:172] (0xc0003e2b00) (0xc000677360) Stream removed, broadcasting: 1\nI0428 00:38:25.756185 2277 log.go:172] (0xc0003e2b00) (0xc000956000) Stream removed, broadcasting: 3\nI0428 00:38:25.756191 2277 log.go:172] 
(0xc0003e2b00) (0xc00077c0a0) Stream removed, broadcasting: 5\n" Apr 28 00:38:25.761: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 28 00:38:25.761: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 28 00:38:25.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4352 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 28 00:38:26.033: INFO: stderr: "I0428 00:38:25.891675 2298 log.go:172] (0xc00003a370) (0xc000516aa0) Create stream\nI0428 00:38:25.891744 2298 log.go:172] (0xc00003a370) (0xc000516aa0) Stream added, broadcasting: 1\nI0428 00:38:25.894685 2298 log.go:172] (0xc00003a370) Reply frame received for 1\nI0428 00:38:25.894724 2298 log.go:172] (0xc00003a370) (0xc000a08000) Create stream\nI0428 00:38:25.894736 2298 log.go:172] (0xc00003a370) (0xc000a08000) Stream added, broadcasting: 3\nI0428 00:38:25.895726 2298 log.go:172] (0xc00003a370) Reply frame received for 3\nI0428 00:38:25.895775 2298 log.go:172] (0xc00003a370) (0xc000a06000) Create stream\nI0428 00:38:25.895787 2298 log.go:172] (0xc00003a370) (0xc000a06000) Stream added, broadcasting: 5\nI0428 00:38:25.896805 2298 log.go:172] (0xc00003a370) Reply frame received for 5\nI0428 00:38:25.995444 2298 log.go:172] (0xc00003a370) Data frame received for 5\nI0428 00:38:25.995485 2298 log.go:172] (0xc000a06000) (5) Data frame handling\nI0428 00:38:25.995505 2298 log.go:172] (0xc000a06000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0428 00:38:26.025500 2298 log.go:172] (0xc00003a370) Data frame received for 3\nI0428 00:38:26.025538 2298 log.go:172] (0xc000a08000) (3) Data frame handling\nI0428 00:38:26.025573 2298 log.go:172] (0xc000a08000) (3) Data frame sent\nI0428 00:38:26.025767 2298 log.go:172] (0xc00003a370) Data frame received 
for 5\nI0428 00:38:26.025825 2298 log.go:172] (0xc000a06000) (5) Data frame handling\nI0428 00:38:26.025969 2298 log.go:172] (0xc00003a370) Data frame received for 3\nI0428 00:38:26.026068 2298 log.go:172] (0xc000a08000) (3) Data frame handling\nI0428 00:38:26.027879 2298 log.go:172] (0xc00003a370) Data frame received for 1\nI0428 00:38:26.027927 2298 log.go:172] (0xc000516aa0) (1) Data frame handling\nI0428 00:38:26.027960 2298 log.go:172] (0xc000516aa0) (1) Data frame sent\nI0428 00:38:26.027981 2298 log.go:172] (0xc00003a370) (0xc000516aa0) Stream removed, broadcasting: 1\nI0428 00:38:26.028010 2298 log.go:172] (0xc00003a370) Go away received\nI0428 00:38:26.028568 2298 log.go:172] (0xc00003a370) (0xc000516aa0) Stream removed, broadcasting: 1\nI0428 00:38:26.028586 2298 log.go:172] (0xc00003a370) (0xc000a08000) Stream removed, broadcasting: 3\nI0428 00:38:26.028597 2298 log.go:172] (0xc00003a370) (0xc000a06000) Stream removed, broadcasting: 5\n" Apr 28 00:38:26.033: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 28 00:38:26.033: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 28 00:38:26.033: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 00:38:26.037: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 28 00:38:36.045: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 00:38:36.045: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 28 00:38:36.045: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 28 00:38:36.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999623s Apr 28 00:38:37.063: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992727789s Apr 28 00:38:38.090: INFO: Verifying statefulset ss doesn't 
scale past 3 for another 7.989167099s
Apr 28 00:38:39.094: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.961812292s
Apr 28 00:38:40.099: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.957375864s
Apr 28 00:38:41.104: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.952682463s
Apr 28 00:38:42.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.947771631s
Apr 28 00:38:43.113: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.943571488s
Apr 28 00:38:44.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.938867189s
Apr 28 00:38:45.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 934.443409ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4352
Apr 28 00:38:46.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4352 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 28 00:38:46.347: INFO: stderr: "I0428 00:38:46.266974 2319 log.go:172] (0xc00052fa20) (0xc0007d7540) Create stream\nI0428 00:38:46.267050 2319 log.go:172] (0xc00052fa20) (0xc0007d7540) Stream added, broadcasting: 1\nI0428 00:38:46.270194 2319 log.go:172] (0xc00052fa20) Reply frame received for 1\nI0428 00:38:46.270243 2319 log.go:172] (0xc00052fa20) (0xc00098a000) Create stream\nI0428 00:38:46.270261 2319 log.go:172] (0xc00052fa20) (0xc00098a000) Stream added, broadcasting: 3\nI0428 00:38:46.271355 2319 log.go:172] (0xc00052fa20) Reply frame received for 3\nI0428 00:38:46.271409 2319 log.go:172] (0xc00052fa20) (0xc0004e8000) Create stream\nI0428 00:38:46.271433 2319 log.go:172] (0xc00052fa20) (0xc0004e8000) Stream added, broadcasting: 5\nI0428 00:38:46.272536 2319 log.go:172] (0xc00052fa20) Reply frame received for 5\nI0428 00:38:46.338937 2319 log.go:172] (0xc00052fa20) Data frame received for 
3\nI0428 00:38:46.338990 2319 log.go:172] (0xc00098a000) (3) Data frame handling\nI0428 00:38:46.339017 2319 log.go:172] (0xc00098a000) (3) Data frame sent\nI0428 00:38:46.339037 2319 log.go:172] (0xc00052fa20) Data frame received for 3\nI0428 00:38:46.339052 2319 log.go:172] (0xc00098a000) (3) Data frame handling\nI0428 00:38:46.339079 2319 log.go:172] (0xc00052fa20) Data frame received for 5\nI0428 00:38:46.339115 2319 log.go:172] (0xc0004e8000) (5) Data frame handling\nI0428 00:38:46.339135 2319 log.go:172] (0xc0004e8000) (5) Data frame sent\nI0428 00:38:46.339152 2319 log.go:172] (0xc00052fa20) Data frame received for 5\nI0428 00:38:46.339164 2319 log.go:172] (0xc0004e8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0428 00:38:46.341018 2319 log.go:172] (0xc00052fa20) Data frame received for 1\nI0428 00:38:46.341444 2319 log.go:172] (0xc0007d7540) (1) Data frame handling\nI0428 00:38:46.341494 2319 log.go:172] (0xc0007d7540) (1) Data frame sent\nI0428 00:38:46.341523 2319 log.go:172] (0xc00052fa20) (0xc0007d7540) Stream removed, broadcasting: 1\nI0428 00:38:46.341556 2319 log.go:172] (0xc00052fa20) Go away received\nI0428 00:38:46.341846 2319 log.go:172] (0xc00052fa20) (0xc0007d7540) Stream removed, broadcasting: 1\nI0428 00:38:46.341862 2319 log.go:172] (0xc00052fa20) (0xc00098a000) Stream removed, broadcasting: 3\nI0428 00:38:46.341869 2319 log.go:172] (0xc00052fa20) (0xc0004e8000) Stream removed, broadcasting: 5\n" Apr 28 00:38:46.347: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 28 00:38:46.347: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 28 00:38:46.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4352 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 28 
00:38:46.567: INFO: stderr: "I0428 00:38:46.479397 2341 log.go:172] (0xc000a546e0) (0xc0006b9360) Create stream\nI0428 00:38:46.479454 2341 log.go:172] (0xc000a546e0) (0xc0006b9360) Stream added, broadcasting: 1\nI0428 00:38:46.482168 2341 log.go:172] (0xc000a546e0) Reply frame received for 1\nI0428 00:38:46.482237 2341 log.go:172] (0xc000a546e0) (0xc000ac0000) Create stream\nI0428 00:38:46.482257 2341 log.go:172] (0xc000a546e0) (0xc000ac0000) Stream added, broadcasting: 3\nI0428 00:38:46.483314 2341 log.go:172] (0xc000a546e0) Reply frame received for 3\nI0428 00:38:46.483354 2341 log.go:172] (0xc000a546e0) (0xc000992000) Create stream\nI0428 00:38:46.483372 2341 log.go:172] (0xc000a546e0) (0xc000992000) Stream added, broadcasting: 5\nI0428 00:38:46.484266 2341 log.go:172] (0xc000a546e0) Reply frame received for 5\nI0428 00:38:46.561868 2341 log.go:172] (0xc000a546e0) Data frame received for 3\nI0428 00:38:46.561895 2341 log.go:172] (0xc000ac0000) (3) Data frame handling\nI0428 00:38:46.561913 2341 log.go:172] (0xc000ac0000) (3) Data frame sent\nI0428 00:38:46.561920 2341 log.go:172] (0xc000a546e0) Data frame received for 3\nI0428 00:38:46.561928 2341 log.go:172] (0xc000ac0000) (3) Data frame handling\nI0428 00:38:46.562073 2341 log.go:172] (0xc000a546e0) Data frame received for 5\nI0428 00:38:46.562098 2341 log.go:172] (0xc000992000) (5) Data frame handling\nI0428 00:38:46.562114 2341 log.go:172] (0xc000992000) (5) Data frame sent\nI0428 00:38:46.562122 2341 log.go:172] (0xc000a546e0) Data frame received for 5\nI0428 00:38:46.562129 2341 log.go:172] (0xc000992000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0428 00:38:46.563460 2341 log.go:172] (0xc000a546e0) Data frame received for 1\nI0428 00:38:46.563492 2341 log.go:172] (0xc0006b9360) (1) Data frame handling\nI0428 00:38:46.563505 2341 log.go:172] (0xc0006b9360) (1) Data frame sent\nI0428 00:38:46.563513 2341 log.go:172] (0xc000a546e0) (0xc0006b9360) Stream removed, 
broadcasting: 1\nI0428 00:38:46.563521 2341 log.go:172] (0xc000a546e0) Go away received\nI0428 00:38:46.563852 2341 log.go:172] (0xc000a546e0) (0xc0006b9360) Stream removed, broadcasting: 1\nI0428 00:38:46.563870 2341 log.go:172] (0xc000a546e0) (0xc000ac0000) Stream removed, broadcasting: 3\nI0428 00:38:46.563877 2341 log.go:172] (0xc000a546e0) (0xc000992000) Stream removed, broadcasting: 5\n" Apr 28 00:38:46.568: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 28 00:38:46.568: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 28 00:38:46.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4352 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 28 00:38:46.779: INFO: stderr: "I0428 00:38:46.703028 2361 log.go:172] (0xc0005fea50) (0xc00069d4a0) Create stream\nI0428 00:38:46.703089 2361 log.go:172] (0xc0005fea50) (0xc00069d4a0) Stream added, broadcasting: 1\nI0428 00:38:46.705959 2361 log.go:172] (0xc0005fea50) Reply frame received for 1\nI0428 00:38:46.706012 2361 log.go:172] (0xc0005fea50) (0xc000690000) Create stream\nI0428 00:38:46.706027 2361 log.go:172] (0xc0005fea50) (0xc000690000) Stream added, broadcasting: 3\nI0428 00:38:46.707270 2361 log.go:172] (0xc0005fea50) Reply frame received for 3\nI0428 00:38:46.707316 2361 log.go:172] (0xc0005fea50) (0xc00069d540) Create stream\nI0428 00:38:46.707330 2361 log.go:172] (0xc0005fea50) (0xc00069d540) Stream added, broadcasting: 5\nI0428 00:38:46.708305 2361 log.go:172] (0xc0005fea50) Reply frame received for 5\nI0428 00:38:46.773988 2361 log.go:172] (0xc0005fea50) Data frame received for 3\nI0428 00:38:46.774040 2361 log.go:172] (0xc000690000) (3) Data frame handling\nI0428 00:38:46.774053 2361 log.go:172] (0xc000690000) (3) Data frame sent\nI0428 00:38:46.774062 2361 
log.go:172] (0xc0005fea50) Data frame received for 3\nI0428 00:38:46.774069 2361 log.go:172] (0xc000690000) (3) Data frame handling\nI0428 00:38:46.774080 2361 log.go:172] (0xc0005fea50) Data frame received for 5\nI0428 00:38:46.774087 2361 log.go:172] (0xc00069d540) (5) Data frame handling\nI0428 00:38:46.774096 2361 log.go:172] (0xc00069d540) (5) Data frame sent\nI0428 00:38:46.774105 2361 log.go:172] (0xc0005fea50) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0428 00:38:46.774112 2361 log.go:172] (0xc00069d540) (5) Data frame handling\nI0428 00:38:46.775354 2361 log.go:172] (0xc0005fea50) Data frame received for 1\nI0428 00:38:46.775372 2361 log.go:172] (0xc00069d4a0) (1) Data frame handling\nI0428 00:38:46.775386 2361 log.go:172] (0xc00069d4a0) (1) Data frame sent\nI0428 00:38:46.775398 2361 log.go:172] (0xc0005fea50) (0xc00069d4a0) Stream removed, broadcasting: 1\nI0428 00:38:46.775420 2361 log.go:172] (0xc0005fea50) Go away received\nI0428 00:38:46.775717 2361 log.go:172] (0xc0005fea50) (0xc00069d4a0) Stream removed, broadcasting: 1\nI0428 00:38:46.775731 2361 log.go:172] (0xc0005fea50) (0xc000690000) Stream removed, broadcasting: 3\nI0428 00:38:46.775737 2361 log.go:172] (0xc0005fea50) (0xc00069d540) Stream removed, broadcasting: 5\n" Apr 28 00:38:46.779: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 28 00:38:46.779: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 28 00:38:46.779: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 28 00:39:06.792: INFO: Deleting all statefulset in ns statefulset-4352 Apr 28 00:39:06.794: INFO: Scaling statefulset ss to 0 Apr 28 
00:39:06.802: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 00:39:06.805: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:39:06.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4352" for this suite. • [SLOW TEST:85.069 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":140,"skipped":2393,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:39:06.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:39:17.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3263" for this suite. • [SLOW TEST:11.161 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":141,"skipped":2404,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:39:17.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 28 00:39:18.069: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
Apr 28 00:39:18.755: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 28 00:39:20.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723631158, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723631158, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723631158, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723631158, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 00:39:23.572: INFO: Waited 618.704457ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:39:24.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5668" for this suite. 
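Editor's note: the framework above polls the sample-apiserver deployment's status until it reports available, then logs how long it waited (618ms here). A minimal sketch of that poll-until-ready pattern in Python — the `wait_for` helper and its parameters are illustrative, not the e2e framework's actual API:

```python
import time

def wait_for(condition, timeout=60.0, interval=0.5):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Returns True if the condition was met, False on timeout. Mirrors the
    "Waiting up to ..." / "Waited ... to be ready" pattern in the log.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

A real readiness check would inspect `DeploymentStatus` fields such as `ReadyReplicas` against the desired replica count, exactly as the log's `v1.DeploymentStatus` dumps show.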
• [SLOW TEST:6.325 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":142,"skipped":2414,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:39:24.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 28 00:39:24.386: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 28 00:39:24.422: INFO: Waiting for terminating namespaces to be deleted... 
Apr 28 00:39:24.424: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 28 00:39:24.439: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:39:24.439: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 00:39:24.439: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:39:24.439: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 00:39:24.439: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 28 00:39:24.454: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:39:24.454: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 00:39:24.454: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 00:39:24.454: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-280b9e26-25c8-4f3d-a335-0e65bd932bf2 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-280b9e26-25c8-4f3d-a335-0e65bd932bf2 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-280b9e26-25c8-4f3d-a335-0e65bd932bf2 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:44:32.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4079" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.538 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":143,"skipped":2416,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:44:32.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 28 00:44:32.924: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix205683066/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:44:32.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5347" for this suite. 
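Editor's note: the test above starts `kubectl proxy --unix-socket=<path>` and then retrieves `/api/` through that socket rather than a TCP port. A minimal sketch of how a client speaks HTTP over a unix-domain socket (the function name and the hard-coded HTTP/1.0 request are illustrative; `kubectl`'s own client is written in Go):

```python
import socket

def get_over_unix_socket(path, request_path="/api/"):
    """Send a minimal HTTP/1.0 GET over a unix-domain socket at `path`
    and return the raw response bytes (status line, headers, and body)."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        request = f"GET {request_path} HTTP/1.0\r\nHost: localhost\r\n\r\n"
        s.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:  # server closed the connection: response complete
                break
            chunks.append(data)
    return b"".join(chunks)
```

Against a live proxy this would be called as `get_over_unix_socket("/tmp/kubectl-proxy-unix205683066/test")`, matching the socket path in the log.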
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":144,"skipped":2426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:44:33.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 00:44:33.063: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84884945-2c9d-4bed-9c89-8492d49dbf7f" in namespace "projected-5458" to be "Succeeded or Failed" Apr 28 00:44:33.067: INFO: Pod "downwardapi-volume-84884945-2c9d-4bed-9c89-8492d49dbf7f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.4016ms Apr 28 00:44:35.071: INFO: Pod "downwardapi-volume-84884945-2c9d-4bed-9c89-8492d49dbf7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007308796s Apr 28 00:44:37.074: INFO: Pod "downwardapi-volume-84884945-2c9d-4bed-9c89-8492d49dbf7f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010985297s STEP: Saw pod success Apr 28 00:44:37.074: INFO: Pod "downwardapi-volume-84884945-2c9d-4bed-9c89-8492d49dbf7f" satisfied condition "Succeeded or Failed" Apr 28 00:44:37.077: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-84884945-2c9d-4bed-9c89-8492d49dbf7f container client-container: STEP: delete the pod Apr 28 00:44:37.124: INFO: Waiting for pod downwardapi-volume-84884945-2c9d-4bed-9c89-8492d49dbf7f to disappear Apr 28 00:44:37.138: INFO: Pod downwardapi-volume-84884945-2c9d-4bed-9c89-8492d49dbf7f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:44:37.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5458" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2450,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:44:37.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A 
or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 28 00:44:37.190: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7068 /api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-a 6f89d5d7-d237-4d96-8d66-deed0f3ce57c 11590614 0 2020-04-28 00:44:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 00:44:37.190: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7068 /api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-a 6f89d5d7-d237-4d96-8d66-deed0f3ce57c 11590614 0 2020-04-28 00:44:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 28 00:44:47.198: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7068 /api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-a 6f89d5d7-d237-4d96-8d66-deed0f3ce57c 11590680 0 2020-04-28 00:44:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 00:44:47.198: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7068 /api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-a 6f89d5d7-d237-4d96-8d66-deed0f3ce57c 11590680 0 2020-04-28 00:44:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 28 00:44:57.207: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7068 
/api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-a 6f89d5d7-d237-4d96-8d66-deed0f3ce57c 11590708 0 2020-04-28 00:44:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 00:44:57.207: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7068 /api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-a 6f89d5d7-d237-4d96-8d66-deed0f3ce57c 11590708 0 2020-04-28 00:44:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 28 00:45:07.214: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7068 /api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-a 6f89d5d7-d237-4d96-8d66-deed0f3ce57c 11590738 0 2020-04-28 00:44:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 00:45:07.214: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7068 /api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-a 6f89d5d7-d237-4d96-8d66-deed0f3ce57c 11590738 0 2020-04-28 00:44:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 28 00:45:17.222: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7068 /api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-b 0ef19861-ea90-427b-8f5b-0d9c8f6473a2 11590768 0 2020-04-28 00:45:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 00:45:17.222: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7068 /api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-b 0ef19861-ea90-427b-8f5b-0d9c8f6473a2 11590768 0 2020-04-28 00:45:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 28 00:45:27.228: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7068 /api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-b 0ef19861-ea90-427b-8f5b-0d9c8f6473a2 11590798 0 2020-04-28 00:45:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 00:45:27.228: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7068 /api/v1/namespaces/watch-7068/configmaps/e2e-watch-test-configmap-b 0ef19861-ea90-427b-8f5b-0d9c8f6473a2 11590798 0 2020-04-28 00:45:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:45:37.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7068" for this suite. 
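Editor's note: the watch test above registers three watchers (label A, label B, and A-or-B) and checks that each ADDED/MODIFIED/DELETED event is delivered only to the watchers whose label selector matches — hence every event appears twice in the log (once for the exact-label watcher, once for the A-or-B watcher). A toy sketch of that label-filtered fan-out; `FakeWatchBroadcaster` is a stand-in for the API server's watch machinery, not a real client-go type:

```python
class FakeWatchBroadcaster:
    """Toy watch fan-out: each watcher registers a label predicate and
    receives only the events whose object labels satisfy it."""

    def __init__(self):
        self.watchers = []  # list of (predicate, received-events list)

    def watch(self, predicate):
        events = []
        self.watchers.append((predicate, events))
        return events

    def emit(self, event_type, name, labels):
        for predicate, events in self.watchers:
            if predicate(labels):
                events.append((event_type, name))

b = FakeWatchBroadcaster()
a_events = b.watch(lambda l: l.get("watch-this-configmap") == "multiple-watchers-A")
b_events = b.watch(lambda l: l.get("watch-this-configmap") == "multiple-watchers-B")
ab_events = b.watch(lambda l: l.get("watch-this-configmap")
                    in ("multiple-watchers-A", "multiple-watchers-B"))

labels_a = {"watch-this-configmap": "multiple-watchers-A"}
b.emit("ADDED", "e2e-watch-test-configmap-a", labels_a)
b.emit("MODIFIED", "e2e-watch-test-configmap-a", labels_a)
b.emit("DELETED", "e2e-watch-test-configmap-a", labels_a)
```

The label-A watcher and the A-or-B watcher each record the full ADDED/MODIFIED/DELETED sequence, while the label-B watcher records nothing — the same invariant the e2e test asserts.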
• [SLOW TEST:60.093 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":146,"skipped":2463,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:45:37.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-fe7ab79b-94de-48f0-96d0-4dab6a191343 STEP: Creating configMap with name cm-test-opt-upd-c0989ac8-548d-4316-ad71-fbce7c6136a3 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-fe7ab79b-94de-48f0-96d0-4dab6a191343 STEP: Updating configmap cm-test-opt-upd-c0989ac8-548d-4316-ad71-fbce7c6136a3 STEP: Creating configMap with name cm-test-opt-create-31f7dc8e-ead8-4cf9-920f-56702cc7bed5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:46:59.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2816" for this suite. • [SLOW TEST:82.497 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:46:59.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-21a7cc8d-b1ca-4e75-b6b6-fdaa208d6d28 STEP: Creating a pod to test consume secrets Apr 28 00:46:59.826: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1682b88d-8934-45a6-8cbb-7b6ee4576ffb" in namespace 
"projected-271" to be "Succeeded or Failed" Apr 28 00:46:59.845: INFO: Pod "pod-projected-secrets-1682b88d-8934-45a6-8cbb-7b6ee4576ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.458843ms Apr 28 00:47:01.869: INFO: Pod "pod-projected-secrets-1682b88d-8934-45a6-8cbb-7b6ee4576ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043554847s Apr 28 00:47:03.873: INFO: Pod "pod-projected-secrets-1682b88d-8934-45a6-8cbb-7b6ee4576ffb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047032685s STEP: Saw pod success Apr 28 00:47:03.873: INFO: Pod "pod-projected-secrets-1682b88d-8934-45a6-8cbb-7b6ee4576ffb" satisfied condition "Succeeded or Failed" Apr 28 00:47:03.875: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-1682b88d-8934-45a6-8cbb-7b6ee4576ffb container projected-secret-volume-test: STEP: delete the pod Apr 28 00:47:03.909: INFO: Waiting for pod pod-projected-secrets-1682b88d-8934-45a6-8cbb-7b6ee4576ffb to disappear Apr 28 00:47:03.920: INFO: Pod pod-projected-secrets-1682b88d-8934-45a6-8cbb-7b6ee4576ffb no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:47:03.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-271" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2512,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:47:03.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-64665653-5bea-47d0-8a7e-539903045112 STEP: Creating a pod to test consume configMaps Apr 28 00:47:04.073: INFO: Waiting up to 5m0s for pod "pod-configmaps-995211f0-4d4d-48b7-a44b-e8629a43ab14" in namespace "configmap-4363" to be "Succeeded or Failed" Apr 28 00:47:04.100: INFO: Pod "pod-configmaps-995211f0-4d4d-48b7-a44b-e8629a43ab14": Phase="Pending", Reason="", readiness=false. Elapsed: 26.701088ms Apr 28 00:47:06.104: INFO: Pod "pod-configmaps-995211f0-4d4d-48b7-a44b-e8629a43ab14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030787652s Apr 28 00:47:08.108: INFO: Pod "pod-configmaps-995211f0-4d4d-48b7-a44b-e8629a43ab14": Phase="Running", Reason="", readiness=true. Elapsed: 4.034901714s Apr 28 00:47:10.113: INFO: Pod "pod-configmaps-995211f0-4d4d-48b7-a44b-e8629a43ab14": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.039204714s STEP: Saw pod success Apr 28 00:47:10.113: INFO: Pod "pod-configmaps-995211f0-4d4d-48b7-a44b-e8629a43ab14" satisfied condition "Succeeded or Failed" Apr 28 00:47:10.116: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-995211f0-4d4d-48b7-a44b-e8629a43ab14 container configmap-volume-test: STEP: delete the pod Apr 28 00:47:10.139: INFO: Waiting for pod pod-configmaps-995211f0-4d4d-48b7-a44b-e8629a43ab14 to disappear Apr 28 00:47:10.144: INFO: Pod pod-configmaps-995211f0-4d4d-48b7-a44b-e8629a43ab14 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:47:10.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4363" for this suite. • [SLOW TEST:6.242 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2541,"failed":0} SSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:47:10.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names 
for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-329 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-329;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-329 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-329;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-329.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-329.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-329.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-329.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-329.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-329.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-329.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-329.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-329.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-329.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-329.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-329.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 169.182.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.182.169_udp@PTR;check="$$(dig +tcp +noall +answer +search 169.182.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.182.169_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-329 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-329;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-329 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-329;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-329.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-329.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-329.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-329.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-329.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-329.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-329.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-329.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-329.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-329.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-329.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-329.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-329.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 169.182.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.182.169_udp@PTR;check="$$(dig +tcp +noall +answer +search 169.182.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.182.169_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 00:47:16.348: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.351: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.354: INFO: Unable to read wheezy_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.356: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server 
could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.360: INFO: Unable to read wheezy_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.362: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.366: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.369: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.390: INFO: Unable to read jessie_udp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.393: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.396: INFO: Unable to read jessie_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.399: INFO: Unable to read jessie_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the 
server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.403: INFO: Unable to read jessie_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.406: INFO: Unable to read jessie_tcp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.409: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.413: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:16.432: INFO: Lookups using dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-329 wheezy_tcp@dns-test-service.dns-329 wheezy_udp@dns-test-service.dns-329.svc wheezy_tcp@dns-test-service.dns-329.svc wheezy_udp@_http._tcp.dns-test-service.dns-329.svc wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-329 jessie_tcp@dns-test-service.dns-329 jessie_udp@dns-test-service.dns-329.svc jessie_tcp@dns-test-service.dns-329.svc jessie_udp@_http._tcp.dns-test-service.dns-329.svc jessie_tcp@_http._tcp.dns-test-service.dns-329.svc] Apr 28 00:47:21.437: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the 
server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.441: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.444: INFO: Unable to read wheezy_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.448: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.451: INFO: Unable to read wheezy_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.454: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.458: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.461: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.483: INFO: Unable to read jessie_udp@dns-test-service from pod 
dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.486: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.489: INFO: Unable to read jessie_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.493: INFO: Unable to read jessie_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.495: INFO: Unable to read jessie_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.498: INFO: Unable to read jessie_tcp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.502: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.505: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:21.522: INFO: Lookups using 
dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-329 wheezy_tcp@dns-test-service.dns-329 wheezy_udp@dns-test-service.dns-329.svc wheezy_tcp@dns-test-service.dns-329.svc wheezy_udp@_http._tcp.dns-test-service.dns-329.svc wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-329 jessie_tcp@dns-test-service.dns-329 jessie_udp@dns-test-service.dns-329.svc jessie_tcp@dns-test-service.dns-329.svc jessie_udp@_http._tcp.dns-test-service.dns-329.svc jessie_tcp@_http._tcp.dns-test-service.dns-329.svc] Apr 28 00:47:26.437: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.440: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.444: INFO: Unable to read wheezy_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.450: INFO: Unable to read wheezy_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.453: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329.svc 
from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.455: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.458: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.478: INFO: Unable to read jessie_udp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.481: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.483: INFO: Unable to read jessie_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.487: INFO: Unable to read jessie_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.491: INFO: Unable to read jessie_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.494: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.497: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.500: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:26.515: INFO: Lookups using dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-329 wheezy_tcp@dns-test-service.dns-329 wheezy_udp@dns-test-service.dns-329.svc wheezy_tcp@dns-test-service.dns-329.svc wheezy_udp@_http._tcp.dns-test-service.dns-329.svc wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-329 jessie_tcp@dns-test-service.dns-329 jessie_udp@dns-test-service.dns-329.svc jessie_tcp@dns-test-service.dns-329.svc jessie_udp@_http._tcp.dns-test-service.dns-329.svc jessie_tcp@_http._tcp.dns-test-service.dns-329.svc] Apr 28 00:47:31.436: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.439: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.442: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.445: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.448: INFO: Unable to read wheezy_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.451: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.454: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.457: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.476: INFO: Unable to read jessie_udp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.478: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.481: INFO: Unable 
to read jessie_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.483: INFO: Unable to read jessie_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.486: INFO: Unable to read jessie_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.488: INFO: Unable to read jessie_tcp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.491: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.494: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:31.511: INFO: Lookups using dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-329 wheezy_tcp@dns-test-service.dns-329 wheezy_udp@dns-test-service.dns-329.svc wheezy_tcp@dns-test-service.dns-329.svc wheezy_udp@_http._tcp.dns-test-service.dns-329.svc wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.dns-329 jessie_tcp@dns-test-service.dns-329 jessie_udp@dns-test-service.dns-329.svc jessie_tcp@dns-test-service.dns-329.svc jessie_udp@_http._tcp.dns-test-service.dns-329.svc jessie_tcp@_http._tcp.dns-test-service.dns-329.svc] Apr 28 00:47:36.437: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.442: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.445: INFO: Unable to read wheezy_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.449: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.452: INFO: Unable to read wheezy_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.456: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.460: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) 
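Editor's note: the failed-lookup lists repeated in the probe iterations above expand a fixed matrix of record names (two prober images, UDP and TCP, four A/SRV targets). A sketch of that expansion, assuming illustrative helper names; the service and namespace strings are taken from this log, the functions themselves are not part of the e2e framework:

```python
def probed_names(service="dns-test-service", namespace="dns-329"):
    """Return the lookup targets exercised by the dig probe loop above."""
    return [
        service,                                     # bare partial name, relies on the search path
        f"{service}.{namespace}",                    # namespace-qualified partial name
        f"{service}.{namespace}.svc",                # cluster-suffix partial name
        f"_http._tcp.{service}.{namespace}.svc",     # SRV record for the named port
    ]

def result_keys(service="dns-test-service", namespace="dns-329"):
    """Expand the image x name x protocol matrix in the order the log reports it."""
    keys = []
    for image in ("wheezy", "jessie"):               # the two prober container images
        for name in probed_names(service, namespace):
            for proto in ("udp", "tcp"):             # dig +notcp vs dig +tcp
                keys.append(f"{image}_{proto}@{name}")
    return keys
```

Each key corresponds to one `/results/...` file the prober writes; a retry iteration is declared failed while any key is still unreadable, which is exactly what the repeated "Lookups ... failed for:" entries show until the 00:47:46 success.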
Apr 28 00:47:36.463: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.486: INFO: Unable to read jessie_udp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.489: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.492: INFO: Unable to read jessie_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.495: INFO: Unable to read jessie_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.498: INFO: Unable to read jessie_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.502: INFO: Unable to read jessie_tcp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.505: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods 
dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.508: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:36.527: INFO: Lookups using dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-329 wheezy_tcp@dns-test-service.dns-329 wheezy_udp@dns-test-service.dns-329.svc wheezy_tcp@dns-test-service.dns-329.svc wheezy_udp@_http._tcp.dns-test-service.dns-329.svc wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-329 jessie_tcp@dns-test-service.dns-329 jessie_udp@dns-test-service.dns-329.svc jessie_tcp@dns-test-service.dns-329.svc jessie_udp@_http._tcp.dns-test-service.dns-329.svc jessie_tcp@_http._tcp.dns-test-service.dns-329.svc] Apr 28 00:47:41.437: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.441: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.445: INFO: Unable to read wheezy_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.449: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods 
dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.452: INFO: Unable to read wheezy_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.455: INFO: Unable to read wheezy_tcp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.459: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.462: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.484: INFO: Unable to read jessie_udp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.487: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.490: INFO: Unable to read jessie_udp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.493: INFO: Unable to read jessie_tcp@dns-test-service.dns-329 from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource 
(get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.497: INFO: Unable to read jessie_udp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.501: INFO: Unable to read jessie_tcp@dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.505: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.509: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-329.svc from pod dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f: the server could not find the requested resource (get pods dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f) Apr 28 00:47:41.523: INFO: Lookups using dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-329 wheezy_tcp@dns-test-service.dns-329 wheezy_udp@dns-test-service.dns-329.svc wheezy_tcp@dns-test-service.dns-329.svc wheezy_udp@_http._tcp.dns-test-service.dns-329.svc wheezy_tcp@_http._tcp.dns-test-service.dns-329.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-329 jessie_tcp@dns-test-service.dns-329 jessie_udp@dns-test-service.dns-329.svc jessie_tcp@dns-test-service.dns-329.svc jessie_udp@_http._tcp.dns-test-service.dns-329.svc jessie_tcp@_http._tcp.dns-test-service.dns-329.svc] Apr 28 00:47:46.543: INFO: DNS probes using dns-329/dns-test-d75e7bae-85e8-475a-9b0f-ecd7db178f4f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the 
test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:47:46.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-329" for this suite. • [SLOW TEST:36.762 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":150,"skipped":2546,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:47:46.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Apr 28 00:47:47.266: INFO: Waiting up to 5m0s for pod "client-containers-596f65b3-4602-4ee6-8aab-d692c200fc9b" in namespace "containers-8934" to be "Succeeded or Failed" Apr 28 00:47:47.284: INFO: Pod 
"client-containers-596f65b3-4602-4ee6-8aab-d692c200fc9b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.165117ms Apr 28 00:47:49.288: INFO: Pod "client-containers-596f65b3-4602-4ee6-8aab-d692c200fc9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021941131s Apr 28 00:47:51.292: INFO: Pod "client-containers-596f65b3-4602-4ee6-8aab-d692c200fc9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.026272324s Apr 28 00:47:53.297: INFO: Pod "client-containers-596f65b3-4602-4ee6-8aab-d692c200fc9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030749027s STEP: Saw pod success Apr 28 00:47:53.297: INFO: Pod "client-containers-596f65b3-4602-4ee6-8aab-d692c200fc9b" satisfied condition "Succeeded or Failed" Apr 28 00:47:53.300: INFO: Trying to get logs from node latest-worker2 pod client-containers-596f65b3-4602-4ee6-8aab-d692c200fc9b container test-container: STEP: delete the pod Apr 28 00:47:53.336: INFO: Waiting for pod client-containers-596f65b3-4602-4ee6-8aab-d692c200fc9b to disappear Apr 28 00:47:53.348: INFO: Pod client-containers-596f65b3-4602-4ee6-8aab-d692c200fc9b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:47:53.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8934" for this suite. 
• [SLOW TEST:6.423 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2602,"failed":0} [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:47:53.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 28 00:48:01.449: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 00:48:01.454: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 00:48:03.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 00:48:03.460: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 00:48:05.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 00:48:05.459: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 00:48:07.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 00:48:07.458: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 00:48:09.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 00:48:09.459: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 00:48:11.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 00:48:11.459: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 00:48:13.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 00:48:13.459: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:48:13.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5672" for this suite. 
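The deletion wait logged above polls roughly every 2 seconds until the pod no longer exists. A minimal sketch of that poll-with-timeout pattern (the function and parameter names are illustrative, not the e2e framework's own API):

```python
import time

def wait_for_disappearance(still_exists, timeout=60.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll `still_exists()` every `interval` seconds until it returns
    False (the object is gone) or `timeout` elapses.

    Returns True if the object disappeared within the timeout.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if not still_exists():
            return True
        sleep(interval)
    return not still_exists()  # one final check at the deadline

# Simulated pod that disappears on the fourth check, mirroring the
# "still exists ... no longer exists" sequence in the log:
checks = iter([True, True, True, False])
gone = wait_for_disappearance(lambda: next(checks),
                              timeout=10, interval=0,
                              sleep=lambda s: None)
```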
• [SLOW TEST:20.116 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2602,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:48:13.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-75.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-75.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-75.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-75.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-75.svc.cluster.local SRV)" && 
test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-75.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-75.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-75.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-75.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-75.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-75.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 187.167.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.167.187_udp@PTR;check="$$(dig +tcp +noall +answer +search 187.167.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.167.187_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-75.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-75.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-75.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-75.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-75.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-75.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-75.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-75.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-75.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-75.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-75.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 187.167.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.167.187_udp@PTR;check="$$(dig +tcp +noall +answer +search 187.167.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.167.187_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 00:48:19.634: INFO: Unable to read wheezy_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:19.637: INFO: Unable to read wheezy_tcp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:19.640: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:19.643: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:19.666: INFO: Unable to read jessie_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:19.668: INFO: Unable to read jessie_tcp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:19.670: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod 
dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:19.672: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:19.687: INFO: Lookups using dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9 failed for: [wheezy_udp@dns-test-service.dns-75.svc.cluster.local wheezy_tcp@dns-test-service.dns-75.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local jessie_udp@dns-test-service.dns-75.svc.cluster.local jessie_tcp@dns-test-service.dns-75.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local] Apr 28 00:48:24.722: INFO: Unable to read wheezy_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:24.725: INFO: Unable to read wheezy_tcp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:24.728: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:24.731: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could 
not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:24.747: INFO: Unable to read jessie_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:24.749: INFO: Unable to read jessie_tcp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:24.752: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:24.754: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:24.770: INFO: Lookups using dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9 failed for: [wheezy_udp@dns-test-service.dns-75.svc.cluster.local wheezy_tcp@dns-test-service.dns-75.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local jessie_udp@dns-test-service.dns-75.svc.cluster.local jessie_tcp@dns-test-service.dns-75.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local] Apr 28 00:48:29.692: INFO: Unable to read wheezy_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods 
dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:29.695: INFO: Unable to read wheezy_tcp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:29.698: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:29.701: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:29.722: INFO: Unable to read jessie_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:29.725: INFO: Unable to read jessie_tcp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:29.728: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:29.731: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:29.749: INFO: Lookups using 
dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9 failed for: [wheezy_udp@dns-test-service.dns-75.svc.cluster.local wheezy_tcp@dns-test-service.dns-75.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local jessie_udp@dns-test-service.dns-75.svc.cluster.local jessie_tcp@dns-test-service.dns-75.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local] Apr 28 00:48:34.693: INFO: Unable to read wheezy_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:34.697: INFO: Unable to read wheezy_tcp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:34.700: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:34.704: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:34.728: INFO: Unable to read jessie_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:34.731: INFO: Unable to read jessie_tcp@dns-test-service.dns-75.svc.cluster.local from pod 
dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:34.734: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:34.737: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:34.752: INFO: Lookups using dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9 failed for: [wheezy_udp@dns-test-service.dns-75.svc.cluster.local wheezy_tcp@dns-test-service.dns-75.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local jessie_udp@dns-test-service.dns-75.svc.cluster.local jessie_tcp@dns-test-service.dns-75.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local] Apr 28 00:48:39.700: INFO: Unable to read wheezy_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:39.703: INFO: Unable to read wheezy_tcp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:39.706: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could 
not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:39.708: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:39.723: INFO: Unable to read jessie_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:39.725: INFO: Unable to read jessie_tcp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:39.728: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:39.730: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:39.746: INFO: Lookups using dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9 failed for: [wheezy_udp@dns-test-service.dns-75.svc.cluster.local wheezy_tcp@dns-test-service.dns-75.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local jessie_udp@dns-test-service.dns-75.svc.cluster.local jessie_tcp@dns-test-service.dns-75.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local] Apr 28 00:48:44.692: INFO: Unable to read wheezy_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:44.696: INFO: Unable to read wheezy_tcp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:44.700: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:44.703: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:44.724: INFO: Unable to read jessie_udp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:44.727: INFO: Unable to read jessie_tcp@dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:44.730: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:44.733: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local from pod dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9: the server could not find the requested resource (get pods dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9) Apr 28 00:48:44.751: INFO: Lookups using dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9 failed for: [wheezy_udp@dns-test-service.dns-75.svc.cluster.local wheezy_tcp@dns-test-service.dns-75.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local jessie_udp@dns-test-service.dns-75.svc.cluster.local jessie_tcp@dns-test-service.dns-75.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-75.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-75.svc.cluster.local] Apr 28 00:48:49.750: INFO: DNS probes using dns-75/dns-test-5571001c-44cf-4bd5-9316-78e4b02233c9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:48:50.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-75" for this suite. 
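The probe commands above fan out over a fixed matrix: two prober images (wheezy, jessie), two transports (udp, tcp), and a set of lookup targets, plus a PTR probe whose name is built by reversing the octets of the service's cluster IP. A sketch of how those result-file keys and the reverse-DNS name are formed (the key format and IP follow the log; the helper functions are illustrative):

```python
def probe_keys(images, targets):
    """Result-file keys like 'wheezy_udp@dns-test-service.dns-75.svc.cluster.local'.

    Matches the log's ordering: for each target, a udp probe then a tcp probe.
    """
    return [f"{img}_{proto}@{t}"
            for img in images
            for t in targets
            for proto in ("udp", "tcp")]

def reverse_ptr(ip):
    """Reverse-DNS name for an IPv4 address, as queried by the dig PTR
    probes above: octets reversed, then '.in-addr.arpa.' appended."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

keys = probe_keys(["wheezy"], ["dns-test-service.dns-75.svc.cluster.local"])
ptr = reverse_ptr("10.96.167.187")
```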
• [SLOW TEST:37.173 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":153,"skipped":2604,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:48:50.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 28 00:48:55.270: INFO: Successfully updated pod "labelsupdate07aaf67d-3300-4b34-9c02-4d39bcf31921" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:48:57.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8430" for this suite. 
• [SLOW TEST:6.664 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2605,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:48:57.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-a1951abc-a389-4b5d-bc53-e9132d818b34 STEP: Creating secret with name s-test-opt-upd-e485375c-d564-40d1-a2f0-621f7d642798 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a1951abc-a389-4b5d-bc53-e9132d818b34 STEP: Updating secret s-test-opt-upd-e485375c-d564-40d1-a2f0-621f7d642798 STEP: Creating secret with name s-test-opt-create-856f378d-2f41-4bed-bb5a-a4fa9540ed5e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:50:19.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "secrets-225" for this suite. • [SLOW TEST:82.542 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:50:19.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 28 00:50:20.381: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 28 00:50:22.392: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723631820, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723631820, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723631820, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723631820, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 00:50:25.420: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:50:25.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:50:26.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2430" for this suite. 
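For context on the CustomResourceConversionWebhook test above: a conversion webhook receives custom resources at one API version inside a ConversionReview envelope and must return them rewritten to the requested version. The sketch below shows only the per-object conversion step, with a hypothetical group (`stable.example.com`) and a hypothetical field rename (`port` → `hostPort`); these names are illustrative, not the ones used by the e2e fixture.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// convertCR rewrites a custom resource object from one version of a
// hypothetical example.com API group to another. A real conversion webhook
// would apply this to every object in a ConversionReview request and echo
// the review back with the converted objects and a "Success" status.
func convertCR(obj map[string]interface{}, desiredAPIVersion string) map[string]interface{} {
	out := make(map[string]interface{}, len(obj))
	for k, v := range obj {
		out[k] = v
	}
	out["apiVersion"] = desiredAPIVersion
	// Hypothetical schema change: v2 renames the "port" field to "hostPort".
	if port, ok := out["port"]; ok && desiredAPIVersion == "stable.example.com/v2" {
		delete(out, "port")
		out["hostPort"] = port
	}
	return out
}

func main() {
	v1 := map[string]interface{}{
		"apiVersion": "stable.example.com/v1",
		"kind":       "E2eTestCrd",
		"port":       8080,
	}
	v2 := convertCR(v1, "stable.example.com/v2")
	b, _ := json.Marshal(v2)
	fmt.Println(string(b))
}
```

The "non homogeneous list" variant of this test works the same way, except the webhook must handle a single request whose objects are at a mix of versions.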
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.013 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":156,"skipped":2654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:50:26.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 28 00:50:26.908: INFO: Waiting up to 5m0s for pod "var-expansion-40e31c30-595c-4810-9a50-4dc027d10d9b" in namespace "var-expansion-1696" to be "Succeeded or Failed" Apr 28 00:50:26.926: INFO: Pod "var-expansion-40e31c30-595c-4810-9a50-4dc027d10d9b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.128695ms Apr 28 00:50:28.930: INFO: Pod "var-expansion-40e31c30-595c-4810-9a50-4dc027d10d9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021164803s Apr 28 00:50:30.968: INFO: Pod "var-expansion-40e31c30-595c-4810-9a50-4dc027d10d9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059656559s STEP: Saw pod success Apr 28 00:50:30.968: INFO: Pod "var-expansion-40e31c30-595c-4810-9a50-4dc027d10d9b" satisfied condition "Succeeded or Failed" Apr 28 00:50:30.971: INFO: Trying to get logs from node latest-worker2 pod var-expansion-40e31c30-595c-4810-9a50-4dc027d10d9b container dapi-container: STEP: delete the pod Apr 28 00:50:31.167: INFO: Waiting for pod var-expansion-40e31c30-595c-4810-9a50-4dc027d10d9b to disappear Apr 28 00:50:31.207: INFO: Pod var-expansion-40e31c30-595c-4810-9a50-4dc027d10d9b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:50:31.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1696" for this suite. 
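The Variable Expansion test above checks that a container env var can be composed from previously defined env vars using `$(VAR)` references. A simplified, illustrative sketch of that substitution semantics (not the actual Kubernetes expansion code): `$(NAME)` is replaced when `NAME` is defined, `$$` escapes a literal `$`, and undefined references are left untouched.

```go
package main

import (
	"fmt"
	"strings"
)

// expand performs a simplified version of the $(VAR) substitution Kubernetes
// applies to container env values and command args.
func expand(input string, vars map[string]string) string {
	var out strings.Builder
	for i := 0; i < len(input); i++ {
		if input[i] == '$' && i+1 < len(input) {
			if input[i+1] == '$' { // "$$" escapes to a single "$"
				out.WriteByte('$')
				i++
				continue
			}
			if input[i+1] == '(' {
				if end := strings.IndexByte(input[i+2:], ')'); end >= 0 {
					if val, ok := vars[input[i+2:i+2+end]]; ok {
						out.WriteString(val)
						i += 2 + end
						continue
					}
				}
			}
		}
		out.WriteByte(input[i])
	}
	return out.String()
}

func main() {
	vars := map[string]string{"FOO": "foo-value", "BAR": "bar-value"}
	// Mirrors the test's idea: composing existing env vars into a new value.
	fmt.Println(expand("$(FOO);;$(BAR)", vars)) // foo-value;;bar-value
	fmt.Println(expand("$$(FOO)", vars))        // $(FOO)
	fmt.Println(expand("$(MISSING)", vars))     // $(MISSING)
}
```

In the test itself, the pod's `dapi-container` echoes the composed value and the framework asserts on the container log after the pod reaches "Succeeded or Failed".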
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2677,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:50:31.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-5m7qt in namespace proxy-2096 I0428 00:50:31.381358 7 runners.go:190] Created replication controller with name: proxy-service-5m7qt, namespace: proxy-2096, replica count: 1 I0428 00:50:32.431813 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 00:50:33.432045 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 00:50:34.432202 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 00:50:35.432457 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 00:50:36.432705 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 00:50:37.432932 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 00:50:38.433200 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 00:50:39.433442 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 00:50:40.433644 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 00:50:41.433871 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 00:50:42.434152 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 00:50:43.434381 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 00:50:44.434664 7 runners.go:190] proxy-service-5m7qt Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 28 00:50:44.439: INFO: setup took 13.098671702s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 28 00:50:44.445: INFO: (0) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... 
(200; 6.529557ms) Apr 28 00:50:44.448: INFO: (0) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 9.033576ms) Apr 28 00:50:44.449: INFO: (0) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... (200; 9.869781ms) Apr 28 00:50:44.449: INFO: (0) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 9.992234ms) Apr 28 00:50:44.449: INFO: (0) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 10.514271ms) Apr 28 00:50:44.451: INFO: (0) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 12.623334ms) Apr 28 00:50:44.452: INFO: (0) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 12.606391ms) Apr 28 00:50:44.452: INFO: (0) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname2/proxy/: bar (200; 12.529851ms) Apr 28 00:50:44.452: INFO: (0) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 12.751478ms) Apr 28 00:50:44.452: INFO: (0) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 13.014012ms) Apr 28 00:50:44.453: INFO: (0) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 14.42438ms) Apr 28 00:50:44.457: INFO: (0) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 18.277233ms) Apr 28 00:50:44.457: INFO: (0) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 18.41508ms) Apr 28 00:50:44.457: INFO: (0) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 16.930643ms) Apr 28 00:50:44.457: INFO: (0) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: test<... 
(200; 4.318955ms) Apr 28 00:50:44.462: INFO: (1) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 4.281747ms) Apr 28 00:50:44.463: INFO: (1) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: test (200; 4.98109ms) Apr 28 00:50:44.463: INFO: (1) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 5.002047ms) Apr 28 00:50:44.464: INFO: (1) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 5.951647ms) Apr 28 00:50:44.464: INFO: (1) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... (200; 6.271476ms) Apr 28 00:50:44.464: INFO: (1) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname2/proxy/: bar (200; 6.552145ms) Apr 28 00:50:44.464: INFO: (1) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 6.623204ms) Apr 28 00:50:44.464: INFO: (1) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 6.631713ms) Apr 28 00:50:44.464: INFO: (1) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 6.652459ms) Apr 28 00:50:44.464: INFO: (1) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 6.733215ms) Apr 28 00:50:44.464: INFO: (1) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 6.661533ms) Apr 28 00:50:44.464: INFO: (1) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 6.658107ms) Apr 28 00:50:44.467: INFO: (2) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 2.509131ms) Apr 28 00:50:44.467: INFO: (2) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 2.444765ms) Apr 28 00:50:44.470: INFO: (2) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: 
test<... (200; 5.048502ms) Apr 28 00:50:44.470: INFO: (2) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 5.320923ms) Apr 28 00:50:44.470: INFO: (2) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: test (200; 6.038288ms) Apr 28 00:50:44.470: INFO: (2) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 5.963537ms) Apr 28 00:50:44.471: INFO: (2) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 6.156995ms) Apr 28 00:50:44.471: INFO: (2) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 6.225877ms) Apr 28 00:50:44.471: INFO: (2) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 6.198674ms) Apr 28 00:50:44.471: INFO: (2) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 6.257143ms) Apr 28 00:50:44.471: INFO: (2) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... (200; 6.21514ms) Apr 28 00:50:44.471: INFO: (2) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 6.398487ms) Apr 28 00:50:44.483: INFO: (3) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 11.640731ms) Apr 28 00:50:44.487: INFO: (3) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 15.678137ms) Apr 28 00:50:44.487: INFO: (3) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... (200; 15.678593ms) Apr 28 00:50:44.487: INFO: (3) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 15.701632ms) Apr 28 00:50:44.531: INFO: (3) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... 
(200; 60.282236ms) Apr 28 00:50:44.531: INFO: (3) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 60.306196ms) Apr 28 00:50:44.531: INFO: (3) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 60.38364ms) Apr 28 00:50:44.531: INFO: (3) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 60.360482ms) Apr 28 00:50:44.531: INFO: (3) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 60.477736ms) Apr 28 00:50:44.531: INFO: (3) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname2/proxy/: bar (200; 60.450726ms) Apr 28 00:50:44.531: INFO: (3) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 60.402332ms) Apr 28 00:50:44.531: INFO: (3) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 60.380459ms) Apr 28 00:50:44.531: INFO: (3) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: ... (200; 4.233085ms) Apr 28 00:50:44.536: INFO: (4) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: test (200; 4.454759ms) Apr 28 00:50:44.537: INFO: (4) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... 
(200; 4.771623ms) Apr 28 00:50:44.537: INFO: (4) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.835409ms) Apr 28 00:50:44.537: INFO: (4) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 4.971677ms) Apr 28 00:50:44.537: INFO: (4) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 4.958399ms) Apr 28 00:50:44.537: INFO: (4) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.970842ms) Apr 28 00:50:44.537: INFO: (4) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 5.015224ms) Apr 28 00:50:44.537: INFO: (4) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 5.244224ms) Apr 28 00:50:44.539: INFO: (4) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname2/proxy/: bar (200; 7.067491ms) Apr 28 00:50:44.539: INFO: (4) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 7.097769ms) Apr 28 00:50:44.539: INFO: (4) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 7.047831ms) Apr 28 00:50:44.539: INFO: (4) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 7.149735ms) Apr 28 00:50:44.539: INFO: (4) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 7.229171ms) Apr 28 00:50:44.539: INFO: (4) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 7.47467ms) Apr 28 00:50:44.543: INFO: (5) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 3.362418ms) Apr 28 00:50:44.543: INFO: (5) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 3.610551ms) Apr 28 00:50:44.543: INFO: (5) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... 
(200; 3.614535ms) Apr 28 00:50:44.543: INFO: (5) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 3.712064ms) Apr 28 00:50:44.545: INFO: (5) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 5.269023ms) Apr 28 00:50:44.545: INFO: (5) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... (200; 5.391837ms) Apr 28 00:50:44.545: INFO: (5) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 5.378772ms) Apr 28 00:50:44.545: INFO: (5) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: test<... (200; 4.127454ms) Apr 28 00:50:44.550: INFO: (6) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 4.243538ms) Apr 28 00:50:44.551: INFO: (6) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... (200; 4.364563ms) Apr 28 00:50:44.551: INFO: (6) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname2/proxy/: bar (200; 4.427282ms) Apr 28 00:50:44.551: INFO: (6) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 4.346935ms) Apr 28 00:50:44.551: INFO: (6) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 4.408896ms) Apr 28 00:50:44.551: INFO: (6) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 4.372168ms) Apr 28 00:50:44.551: INFO: (6) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 4.836432ms) Apr 28 00:50:44.552: INFO: (6) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 5.687471ms) Apr 28 00:50:44.552: INFO: (6) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 5.760163ms) Apr 28 00:50:44.552: INFO: (6) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 
5.722781ms) Apr 28 00:50:44.554: INFO: (7) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 1.901671ms) Apr 28 00:50:44.556: INFO: (7) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... (200; 4.074763ms) Apr 28 00:50:44.556: INFO: (7) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... (200; 4.15602ms) Apr 28 00:50:44.556: INFO: (7) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 4.365546ms) Apr 28 00:50:44.556: INFO: (7) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.328898ms) Apr 28 00:50:44.556: INFO: (7) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 4.325644ms) Apr 28 00:50:44.556: INFO: (7) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 4.346526ms) Apr 28 00:50:44.557: INFO: (7) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 4.424615ms) Apr 28 00:50:44.557: INFO: (7) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: test (200; 3.197739ms) Apr 28 00:50:44.561: INFO: (8) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... 
(200; 3.400172ms) Apr 28 00:50:44.561: INFO: (8) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 3.400053ms) Apr 28 00:50:44.562: INFO: (8) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 4.183272ms) Apr 28 00:50:44.562: INFO: (8) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 4.107069ms) Apr 28 00:50:44.562: INFO: (8) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 4.171259ms) Apr 28 00:50:44.562: INFO: (8) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.162585ms) Apr 28 00:50:44.562: INFO: (8) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 4.153125ms) Apr 28 00:50:44.562: INFO: (8) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 4.248332ms) Apr 28 00:50:44.562: INFO: (8) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.286906ms) Apr 28 00:50:44.562: INFO: (8) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... 
(200; 4.559664ms) Apr 28 00:50:44.562: INFO: (8) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 4.604494ms) Apr 28 00:50:44.563: INFO: (8) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 4.724718ms) Apr 28 00:50:44.563: INFO: (8) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 4.622133ms) Apr 28 00:50:44.563: INFO: (8) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: test (200; 4.454465ms) Apr 28 00:50:44.567: INFO: (9) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 4.40716ms) Apr 28 00:50:44.567: INFO: (9) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... (200; 4.380451ms) Apr 28 00:50:44.567: INFO: (9) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... (200; 4.430639ms) Apr 28 00:50:44.567: INFO: (9) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 4.480253ms) Apr 28 00:50:44.567: INFO: (9) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 4.491631ms) Apr 28 00:50:44.567: INFO: (9) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.534798ms) Apr 28 00:50:44.569: INFO: (10) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 1.876138ms) Apr 28 00:50:44.571: INFO: (10) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 3.860481ms) Apr 28 00:50:44.571: INFO: (10) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 3.858303ms) Apr 28 00:50:44.571: INFO: (10) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 3.904193ms) Apr 28 00:50:44.571: INFO: (10) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... 
(200; 3.929144ms) Apr 28 00:50:44.571: INFO: (10) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 3.931712ms) Apr 28 00:50:44.571: INFO: (10) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 3.955018ms) Apr 28 00:50:44.571: INFO: (10) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... (200; 3.885017ms) Apr 28 00:50:44.571: INFO: (10) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: test<... (200; 3.03533ms) Apr 28 00:50:44.576: INFO: (11) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: ... (200; 3.541588ms) Apr 28 00:50:44.576: INFO: (11) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 3.653102ms) Apr 28 00:50:44.576: INFO: (11) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 4.035753ms) Apr 28 00:50:44.576: INFO: (11) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 4.060367ms) Apr 28 00:50:44.576: INFO: (11) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 4.032307ms) Apr 28 00:50:44.577: INFO: (11) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.232869ms) Apr 28 00:50:44.577: INFO: (11) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.284703ms) Apr 28 00:50:44.577: INFO: (11) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 4.416172ms) Apr 28 00:50:44.577: INFO: (11) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 4.441422ms) Apr 28 00:50:44.577: INFO: (11) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname2/proxy/: bar (200; 4.564603ms) Apr 28 00:50:44.577: INFO: (11) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 
4.59023ms) Apr 28 00:50:44.577: INFO: (11) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 4.74128ms) Apr 28 00:50:44.577: INFO: (11) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 4.739138ms) Apr 28 00:50:44.579: INFO: (12) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 1.870244ms) Apr 28 00:50:44.580: INFO: (12) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 2.108059ms) Apr 28 00:50:44.580: INFO: (12) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 2.334805ms) Apr 28 00:50:44.581: INFO: (12) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... (200; 3.174273ms) Apr 28 00:50:44.581: INFO: (12) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 4.022227ms) Apr 28 00:50:44.581: INFO: (12) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: ... 
(200; 4.462747ms) Apr 28 00:50:44.583: INFO: (12) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 4.975053ms) Apr 28 00:50:44.583: INFO: (12) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 5.100679ms) Apr 28 00:50:44.583: INFO: (12) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.789505ms) Apr 28 00:50:44.583: INFO: (12) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.751746ms) Apr 28 00:50:44.583: INFO: (12) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 4.807178ms) Apr 28 00:50:44.583: INFO: (12) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 4.966846ms) Apr 28 00:50:44.585: INFO: (13) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 2.330579ms) Apr 28 00:50:44.585: INFO: (13) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 2.560184ms) Apr 28 00:50:44.585: INFO: (13) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 2.5446ms) Apr 28 00:50:44.586: INFO: (13) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 3.304603ms) Apr 28 00:50:44.586: INFO: (13) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 3.478487ms) Apr 28 00:50:44.586: INFO: (13) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 3.710381ms) Apr 28 00:50:44.587: INFO: (13) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 3.907938ms) Apr 28 00:50:44.587: INFO: (13) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.129677ms) Apr 28 00:50:44.587: INFO: (13) 
/api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: test<... (200; 5.390962ms) Apr 28 00:50:44.589: INFO: (13) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 6.372899ms) Apr 28 00:50:44.589: INFO: (13) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... (200; 6.403038ms) Apr 28 00:50:44.589: INFO: (13) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 6.440925ms) Apr 28 00:50:44.589: INFO: (13) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 6.514151ms) Apr 28 00:50:44.591: INFO: (13) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname2/proxy/: bar (200; 8.150527ms) Apr 28 00:50:44.595: INFO: (14) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... (200; 4.174294ms) Apr 28 00:50:44.595: INFO: (14) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.3349ms) Apr 28 00:50:44.595: INFO: (14) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 4.399186ms) Apr 28 00:50:44.595: INFO: (14) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 4.33074ms) Apr 28 00:50:44.595: INFO: (14) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 4.356118ms) Apr 28 00:50:44.596: INFO: (14) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: ... 
(200; 4.570637ms) Apr 28 00:50:44.596: INFO: (14) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 4.572672ms) Apr 28 00:50:44.596: INFO: (14) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 4.610394ms) Apr 28 00:50:44.597: INFO: (14) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 5.668314ms) Apr 28 00:50:44.597: INFO: (14) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 5.727594ms) Apr 28 00:50:44.597: INFO: (14) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname2/proxy/: bar (200; 6.254782ms) Apr 28 00:50:44.597: INFO: (14) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 6.191712ms) Apr 28 00:50:44.600: INFO: (15) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 2.399139ms) Apr 28 00:50:44.600: INFO: (15) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 2.486023ms) Apr 28 00:50:44.600: INFO: (15) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 2.42617ms) Apr 28 00:50:44.600: INFO: (15) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: ... (200; 4.541648ms) Apr 28 00:50:44.602: INFO: (15) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 4.546007ms) Apr 28 00:50:44.602: INFO: (15) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... 
(200; 4.568306ms) Apr 28 00:50:44.602: INFO: (15) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 4.658206ms) Apr 28 00:50:44.602: INFO: (15) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 4.672562ms) Apr 28 00:50:44.602: INFO: (15) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 4.629238ms) Apr 28 00:50:44.602: INFO: (15) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 4.928071ms) Apr 28 00:50:44.602: INFO: (15) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 5.033601ms) Apr 28 00:50:44.603: INFO: (15) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 5.232981ms) Apr 28 00:50:44.603: INFO: (15) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 5.301091ms) Apr 28 00:50:44.605: INFO: (16) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: test<... (200; 3.291227ms) Apr 28 00:50:44.606: INFO: (16) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 3.48874ms) Apr 28 00:50:44.606: INFO: (16) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... 
(200; 3.556674ms) Apr 28 00:50:44.606: INFO: (16) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 3.560217ms) Apr 28 00:50:44.606: INFO: (16) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 3.675119ms) Apr 28 00:50:44.607: INFO: (16) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 3.706348ms) Apr 28 00:50:44.607: INFO: (16) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 3.884613ms) Apr 28 00:50:44.607: INFO: (16) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 3.980118ms) Apr 28 00:50:44.607: INFO: (16) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 3.97033ms) Apr 28 00:50:44.608: INFO: (16) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 5.163163ms) Apr 28 00:50:44.608: INFO: (16) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname2/proxy/: bar (200; 5.161936ms) Apr 28 00:50:44.608: INFO: (16) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 5.174665ms) Apr 28 00:50:44.608: INFO: (16) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 5.255822ms) Apr 28 00:50:44.608: INFO: (16) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 5.467679ms) Apr 28 00:50:44.609: INFO: (16) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 5.928679ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 3.702387ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname1/proxy/: foo (200; 3.9661ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo 
(200; 4.414866ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 4.380596ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... (200; 4.409652ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 4.503858ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 4.447801ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 4.521224ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname2/proxy/: bar (200; 4.563745ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.70874ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 4.641956ms) Apr 28 00:50:44.613: INFO: (17) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: ... (200; 4.708912ms) Apr 28 00:50:44.614: INFO: (17) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:462/proxy/: tls qux (200; 4.691447ms) Apr 28 00:50:44.617: INFO: (18) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 3.208031ms) Apr 28 00:50:44.617: INFO: (18) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 3.236403ms) Apr 28 00:50:44.617: INFO: (18) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 3.269451ms) Apr 28 00:50:44.617: INFO: (18) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... 
(200; 3.383356ms) Apr 28 00:50:44.617: INFO: (18) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 3.365897ms) Apr 28 00:50:44.617: INFO: (18) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: ... (200; 4.068683ms) Apr 28 00:50:44.618: INFO: (18) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname2/proxy/: bar (200; 4.007144ms) Apr 28 00:50:44.618: INFO: (18) /api/v1/namespaces/proxy-2096/services/http:proxy-service-5m7qt:portname2/proxy/: bar (200; 4.027914ms) Apr 28 00:50:44.618: INFO: (18) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 4.37608ms) Apr 28 00:50:44.618: INFO: (18) /api/v1/namespaces/proxy-2096/services/proxy-service-5m7qt:portname1/proxy/: foo (200; 4.487384ms) Apr 28 00:50:44.618: INFO: (18) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname2/proxy/: tls qux (200; 4.651098ms) Apr 28 00:50:44.618: INFO: (18) /api/v1/namespaces/proxy-2096/services/https:proxy-service-5m7qt:tlsportname1/proxy/: tls baz (200; 4.740385ms) Apr 28 00:50:44.622: INFO: (19) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 3.557539ms) Apr 28 00:50:44.622: INFO: (19) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm/proxy/: test (200; 3.57479ms) Apr 28 00:50:44.622: INFO: (19) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:1080/proxy/: test<... (200; 3.694576ms) Apr 28 00:50:44.622: INFO: (19) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:160/proxy/: foo (200; 3.648887ms) Apr 28 00:50:44.622: INFO: (19) /api/v1/namespaces/proxy-2096/pods/proxy-service-5m7qt-xkgtm:162/proxy/: bar (200; 3.714854ms) Apr 28 00:50:44.622: INFO: (19) /api/v1/namespaces/proxy-2096/pods/http:proxy-service-5m7qt-xkgtm:1080/proxy/: ... 
(200; 3.65779ms) Apr 28 00:50:44.622: INFO: (19) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:460/proxy/: tls baz (200; 3.766584ms) Apr 28 00:50:44.622: INFO: (19) /api/v1/namespaces/proxy-2096/pods/https:proxy-service-5m7qt-xkgtm:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:50:47.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1120" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":159,"skipped":2722,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:50:47.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-2efeeb67-ecff-4869-a9d0-058f18faf8af STEP: Creating a pod to test consume configMaps Apr 28 00:50:47.842: INFO: Waiting up to 5m0s for pod "pod-configmaps-69e2da57-3996-4acc-a03a-dbb9d6ac2664" in namespace "configmap-7024" to be "Succeeded or Failed" Apr 28 00:50:47.855: INFO: Pod "pod-configmaps-69e2da57-3996-4acc-a03a-dbb9d6ac2664": Phase="Pending", Reason="", readiness=false. Elapsed: 12.601872ms Apr 28 00:50:49.859: INFO: Pod "pod-configmaps-69e2da57-3996-4acc-a03a-dbb9d6ac2664": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016843101s Apr 28 00:50:51.863: INFO: Pod "pod-configmaps-69e2da57-3996-4acc-a03a-dbb9d6ac2664": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020822706s STEP: Saw pod success Apr 28 00:50:51.863: INFO: Pod "pod-configmaps-69e2da57-3996-4acc-a03a-dbb9d6ac2664" satisfied condition "Succeeded or Failed" Apr 28 00:50:51.866: INFO: Trying to get logs from node latest-worker pod pod-configmaps-69e2da57-3996-4acc-a03a-dbb9d6ac2664 container configmap-volume-test: STEP: delete the pod Apr 28 00:50:51.904: INFO: Waiting for pod pod-configmaps-69e2da57-3996-4acc-a03a-dbb9d6ac2664 to disappear Apr 28 00:50:51.913: INFO: Pod pod-configmaps-69e2da57-3996-4acc-a03a-dbb9d6ac2664 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:50:51.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7024" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2733,"failed":0} S ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:50:51.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 28 00:50:52.018: INFO: Waiting up to 5m0s for pod 
"downward-api-7859d3f8-c2bb-4c07-b6d3-fd0c1184518f" in namespace "downward-api-5374" to be "Succeeded or Failed" Apr 28 00:50:52.021: INFO: Pod "downward-api-7859d3f8-c2bb-4c07-b6d3-fd0c1184518f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.560255ms Apr 28 00:50:54.034: INFO: Pod "downward-api-7859d3f8-c2bb-4c07-b6d3-fd0c1184518f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016089391s Apr 28 00:50:56.038: INFO: Pod "downward-api-7859d3f8-c2bb-4c07-b6d3-fd0c1184518f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02032037s STEP: Saw pod success Apr 28 00:50:56.038: INFO: Pod "downward-api-7859d3f8-c2bb-4c07-b6d3-fd0c1184518f" satisfied condition "Succeeded or Failed" Apr 28 00:50:56.041: INFO: Trying to get logs from node latest-worker2 pod downward-api-7859d3f8-c2bb-4c07-b6d3-fd0c1184518f container dapi-container: STEP: delete the pod Apr 28 00:50:56.097: INFO: Waiting for pod downward-api-7859d3f8-c2bb-4c07-b6d3-fd0c1184518f to disappear Apr 28 00:50:56.107: INFO: Pod downward-api-7859d3f8-c2bb-4c07-b6d3-fd0c1184518f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:50:56.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5374" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2734,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:50:56.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 28 00:50:56.216: INFO: Waiting up to 5m0s for pod "pod-2c49422f-c6ee-4573-a224-0c0434357088" in namespace "emptydir-9144" to be "Succeeded or Failed" Apr 28 00:50:56.219: INFO: Pod "pod-2c49422f-c6ee-4573-a224-0c0434357088": Phase="Pending", Reason="", readiness=false. Elapsed: 3.579038ms Apr 28 00:50:58.223: INFO: Pod "pod-2c49422f-c6ee-4573-a224-0c0434357088": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007097998s Apr 28 00:51:00.304: INFO: Pod "pod-2c49422f-c6ee-4573-a224-0c0434357088": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.08800904s STEP: Saw pod success Apr 28 00:51:00.304: INFO: Pod "pod-2c49422f-c6ee-4573-a224-0c0434357088" satisfied condition "Succeeded or Failed" Apr 28 00:51:00.306: INFO: Trying to get logs from node latest-worker pod pod-2c49422f-c6ee-4573-a224-0c0434357088 container test-container: STEP: delete the pod Apr 28 00:51:00.666: INFO: Waiting for pod pod-2c49422f-c6ee-4573-a224-0c0434357088 to disappear Apr 28 00:51:00.681: INFO: Pod pod-2c49422f-c6ee-4573-a224-0c0434357088 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:51:00.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9144" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2748,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:51:00.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-ade9236b-6dfe-40a2-9c24-05e89645d081 in 
namespace container-probe-3030 Apr 28 00:51:04.810: INFO: Started pod liveness-ade9236b-6dfe-40a2-9c24-05e89645d081 in namespace container-probe-3030 STEP: checking the pod's current state and verifying that restartCount is present Apr 28 00:51:04.812: INFO: Initial restart count of pod liveness-ade9236b-6dfe-40a2-9c24-05e89645d081 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:55:05.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3030" for this suite. • [SLOW TEST:244.717 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:55:05.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-6659 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6659 to expose endpoints map[] Apr 28 00:55:05.745: INFO: Get endpoints failed (37.179792ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 28 00:55:06.749: INFO: successfully validated that service endpoint-test2 in namespace services-6659 exposes endpoints map[] (1.040878164s elapsed) STEP: Creating pod pod1 in namespace services-6659 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6659 to expose endpoints map[pod1:[80]] Apr 28 00:55:10.820: INFO: successfully validated that service endpoint-test2 in namespace services-6659 exposes endpoints map[pod1:[80]] (4.064960228s elapsed) STEP: Creating pod pod2 in namespace services-6659 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6659 to expose endpoints map[pod1:[80] pod2:[80]] Apr 28 00:55:14.961: INFO: successfully validated that service endpoint-test2 in namespace services-6659 exposes endpoints map[pod1:[80] pod2:[80]] (4.136833478s elapsed) STEP: Deleting pod pod1 in namespace services-6659 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6659 to expose endpoints map[pod2:[80]] Apr 28 00:55:16.003: INFO: successfully validated that service endpoint-test2 in namespace services-6659 exposes endpoints map[pod2:[80]] (1.037787933s elapsed) STEP: Deleting pod pod2 in namespace services-6659 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6659 to expose endpoints map[] Apr 28 00:55:16.128: INFO: successfully validated that service endpoint-test2 in namespace services-6659 exposes endpoints map[] (119.110045ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 
00:55:16.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6659" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:10.942 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":164,"skipped":2777,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:55:16.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:55:20.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4455" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":165,"skipped":2787,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:55:20.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9127.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9127.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 00:55:26.674: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0) Apr 28 00:55:26.680: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0) Apr 28 00:55:26.682: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0) Apr 28 00:55:26.685: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0) Apr 28 00:55:26.692: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0) Apr 28 00:55:26.695: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from 
pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0) Apr 28 00:55:26.697: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0) Apr 28 00:55:26.700: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0) Apr 28 00:55:26.706: INFO: Lookups using dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local] Apr 28 00:55:31.711: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0) Apr 28 00:55:31.715: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0) Apr 28 00:55:31.718: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local from 
pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:31.721: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:31.731: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:31.735: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:31.738: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:31.741: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:31.752: INFO: Lookups using dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local]
Apr 28 00:55:36.710: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:36.714: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:36.717: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:36.720: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:36.730: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:36.734: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:36.737: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:36.739: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:36.746: INFO: Lookups using dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local]
Apr 28 00:55:41.713: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:41.716: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:41.719: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:41.721: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:41.728: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:41.730: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:41.732: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:41.735: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:41.740: INFO: Lookups using dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local]
Apr 28 00:55:46.710: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:46.714: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:46.718: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:46.721: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:46.729: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:46.731: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:46.733: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:46.736: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:46.741: INFO: Lookups using dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local]
Apr 28 00:55:51.712: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:51.716: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:51.719: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:51.721: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:51.728: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:51.729: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:51.731: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:51.734: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local from pod dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0: the server could not find the requested resource (get pods dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0)
Apr 28 00:55:51.738: INFO: Lookups using dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9127.svc.cluster.local jessie_udp@dns-test-service-2.dns-9127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9127.svc.cluster.local]
Apr 28 00:55:56.747: INFO: DNS probes using dns-9127/dns-test-12fe54f9-2664-4cd9-93dc-76b7382612b0 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:55:56.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9127" for this suite.
• [SLOW TEST:36.311 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":166,"skipped":2796,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:55:56.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-cf56fa70-b2db-4e2f-9249-d69a681e1ad3
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:56:03.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8317" for this suite.
• [SLOW TEST:6.686 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2803,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:56:03.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 28 00:56:08.179: INFO: Successfully updated pod "labelsupdateccebc82b-bf0e-4112-9b11-94182fafdbe9"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:56:10.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5316" for this suite.
• [SLOW TEST:6.645 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":2830,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:56:10.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Apr 28 00:56:14.271: INFO: Pod pod-hostip-fa680c73-1ad8-4ac4-8aa4-9fe40ab1915c has hostIP: 172.17.0.12
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:56:14.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9259" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":2862,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:56:14.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 28 00:56:14.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3ba273f-f324-41fa-a5f1-8907f0558cd7" in namespace "projected-7000" to be "Succeeded or Failed"
Apr 28 00:56:14.362: INFO: Pod "downwardapi-volume-c3ba273f-f324-41fa-a5f1-8907f0558cd7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.734755ms
Apr 28 00:56:16.366: INFO: Pod "downwardapi-volume-c3ba273f-f324-41fa-a5f1-8907f0558cd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007910676s
Apr 28 00:56:18.369: INFO: Pod "downwardapi-volume-c3ba273f-f324-41fa-a5f1-8907f0558cd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011101406s
STEP: Saw pod success
Apr 28 00:56:18.370: INFO: Pod "downwardapi-volume-c3ba273f-f324-41fa-a5f1-8907f0558cd7" satisfied condition "Succeeded or Failed"
Apr 28 00:56:18.372: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c3ba273f-f324-41fa-a5f1-8907f0558cd7 container client-container: 
STEP: delete the pod
Apr 28 00:56:18.395: INFO: Waiting for pod downwardapi-volume-c3ba273f-f324-41fa-a5f1-8907f0558cd7 to disappear
Apr 28 00:56:18.398: INFO: Pod downwardapi-volume-c3ba273f-f324-41fa-a5f1-8907f0558cd7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:56:18.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7000" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2870,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:56:18.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:56:22.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-715" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":2884,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:56:22.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Apr 28 00:56:22.641: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7932" to be "Succeeded or Failed"
Apr 28 00:56:22.645: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.623356ms
Apr 28 00:56:24.686: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044068667s
Apr 28 00:56:26.689: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.047365067s
Apr 28 00:56:28.693: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051652972s
STEP: Saw pod success
Apr 28 00:56:28.693: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Apr 28 00:56:28.696: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Apr 28 00:56:28.732: INFO: Waiting for pod pod-host-path-test to disappear
Apr 28 00:56:28.748: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:56:28.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7932" for this suite.
• [SLOW TEST:6.259 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2912,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:56:28.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 28 00:56:29.584: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 28 00:56:31.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632189, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632189, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632189, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632189, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 28 00:56:34.682: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:56:34.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1788" for this suite.
STEP: Destroying namespace "webhook-1788-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.173 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":173,"skipped":2912,"failed":0}
SS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:56:34.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-b552180a-29b6-4652-bf52-6463a6aeaec1
STEP: Creating a pod to test consume secrets
Apr 28 00:56:35.003: INFO: Waiting up to 5m0s for pod "pod-secrets-4bfeac2a-d73a-453a-8281-9a582e44ffd6" in namespace "secrets-5632" to be "Succeeded or Failed"
Apr 28 00:56:35.009: INFO: Pod "pod-secrets-4bfeac2a-d73a-453a-8281-9a582e44ffd6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.617708ms
Apr 28 00:56:37.017: INFO: Pod "pod-secrets-4bfeac2a-d73a-453a-8281-9a582e44ffd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013465477s
Apr 28 00:56:39.021: INFO: Pod "pod-secrets-4bfeac2a-d73a-453a-8281-9a582e44ffd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017424916s
STEP: Saw pod success
Apr 28 00:56:39.021: INFO: Pod "pod-secrets-4bfeac2a-d73a-453a-8281-9a582e44ffd6" satisfied condition "Succeeded or Failed"
Apr 28 00:56:39.024: INFO: Trying to get logs from node latest-worker pod pod-secrets-4bfeac2a-d73a-453a-8281-9a582e44ffd6 container secret-volume-test: 
STEP: delete the pod
Apr 28 00:56:39.089: INFO: Waiting for pod pod-secrets-4bfeac2a-d73a-453a-8281-9a582e44ffd6 to disappear
Apr 28 00:56:39.100: INFO: Pod pod-secrets-4bfeac2a-d73a-453a-8281-9a582e44ffd6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:56:39.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5632" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2914,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:56:39.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 28 00:56:39.950: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 28 00:56:41.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632199, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632199, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632199, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632199, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 28 00:56:44.995: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 00:56:45.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2600" for this suite.
STEP: Destroying namespace "webhook-2600-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.146 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":175,"skipped":2916,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 00:56:45.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Apr 28 00:56:45.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI
documentation Apr 28 00:56:56.857: INFO: >>> kubeConfig: /root/.kube/config Apr 28 00:56:58.762: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:57:09.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-806" for this suite. • [SLOW TEST:24.084 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":176,"skipped":2961,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:57:09.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3463 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-3463 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3463 Apr 28 00:57:09.492: INFO: Found 0 stateful pods, waiting for 1 Apr 28 00:57:19.497: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 28 00:57:19.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3463 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 28 00:57:22.470: INFO: stderr: "I0428 00:57:22.338629 2409 log.go:172] (0xc00089a630) (0xc0007160a0) Create stream\nI0428 00:57:22.338657 2409 log.go:172] (0xc00089a630) (0xc0007160a0) Stream added, broadcasting: 1\nI0428 00:57:22.341499 2409 log.go:172] (0xc00089a630) Reply frame received for 1\nI0428 00:57:22.341570 2409 log.go:172] (0xc00089a630) (0xc000740000) Create stream\nI0428 00:57:22.341596 2409 log.go:172] (0xc00089a630) (0xc000740000) Stream added, broadcasting: 3\nI0428 00:57:22.342656 2409 log.go:172] (0xc00089a630) Reply frame received for 3\nI0428 00:57:22.342715 2409 log.go:172] (0xc00089a630) (0xc0007400a0) Create stream\nI0428 00:57:22.342728 2409 log.go:172] (0xc00089a630) (0xc0007400a0) Stream added, broadcasting: 5\nI0428 00:57:22.343941 2409 log.go:172] (0xc00089a630) Reply frame received for 5\nI0428 00:57:22.427025 2409 log.go:172] (0xc00089a630) Data frame received for 5\nI0428 00:57:22.427056 2409 log.go:172] (0xc0007400a0) (5) Data frame handling\nI0428 
00:57:22.427079 2409 log.go:172] (0xc0007400a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0428 00:57:22.461264 2409 log.go:172] (0xc00089a630) Data frame received for 3\nI0428 00:57:22.461331 2409 log.go:172] (0xc000740000) (3) Data frame handling\nI0428 00:57:22.461358 2409 log.go:172] (0xc000740000) (3) Data frame sent\nI0428 00:57:22.461380 2409 log.go:172] (0xc00089a630) Data frame received for 3\nI0428 00:57:22.461418 2409 log.go:172] (0xc000740000) (3) Data frame handling\nI0428 00:57:22.461478 2409 log.go:172] (0xc00089a630) Data frame received for 5\nI0428 00:57:22.461503 2409 log.go:172] (0xc0007400a0) (5) Data frame handling\nI0428 00:57:22.463594 2409 log.go:172] (0xc00089a630) Data frame received for 1\nI0428 00:57:22.463622 2409 log.go:172] (0xc0007160a0) (1) Data frame handling\nI0428 00:57:22.463638 2409 log.go:172] (0xc0007160a0) (1) Data frame sent\nI0428 00:57:22.463771 2409 log.go:172] (0xc00089a630) (0xc0007160a0) Stream removed, broadcasting: 1\nI0428 00:57:22.463888 2409 log.go:172] (0xc00089a630) Go away received\nI0428 00:57:22.464215 2409 log.go:172] (0xc00089a630) (0xc0007160a0) Stream removed, broadcasting: 1\nI0428 00:57:22.464236 2409 log.go:172] (0xc00089a630) (0xc000740000) Stream removed, broadcasting: 3\nI0428 00:57:22.464250 2409 log.go:172] (0xc00089a630) (0xc0007400a0) Stream removed, broadcasting: 5\n" Apr 28 00:57:22.470: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 28 00:57:22.470: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 28 00:57:22.474: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 28 00:57:32.478: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 00:57:32.478: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 00:57:32.488: INFO: POD NODE PHASE 
GRACE CONDITIONS Apr 28 00:57:32.488: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:09 +0000 UTC }] Apr 28 00:57:32.488: INFO: Apr 28 00:57:32.488: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 28 00:57:33.492: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996227069s Apr 28 00:57:34.807: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992093389s Apr 28 00:57:35.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.677506456s Apr 28 00:57:36.817: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.672515755s Apr 28 00:57:37.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.667005544s Apr 28 00:57:38.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.661635294s Apr 28 00:57:39.834: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.656320015s Apr 28 00:57:40.839: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.650563629s Apr 28 00:57:41.845: INFO: Verifying statefulset ss doesn't scale past 3 for another 645.262771ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3463 Apr 28 00:57:42.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3463 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 28 00:57:43.083: INFO: stderr: "I0428 00:57:42.999100 2438 log.go:172] (0xc000acb340) (0xc000acc8c0) Create stream\nI0428 
00:57:42.999168 2438 log.go:172] (0xc000acb340) (0xc000acc8c0) Stream added, broadcasting: 1\nI0428 00:57:43.001669 2438 log.go:172] (0xc000acb340) Reply frame received for 1\nI0428 00:57:43.001706 2438 log.go:172] (0xc000acb340) (0xc00097e5a0) Create stream\nI0428 00:57:43.001726 2438 log.go:172] (0xc000acb340) (0xc00097e5a0) Stream added, broadcasting: 3\nI0428 00:57:43.002537 2438 log.go:172] (0xc000acb340) Reply frame received for 3\nI0428 00:57:43.002566 2438 log.go:172] (0xc000acb340) (0xc000a503c0) Create stream\nI0428 00:57:43.002589 2438 log.go:172] (0xc000acb340) (0xc000a503c0) Stream added, broadcasting: 5\nI0428 00:57:43.003353 2438 log.go:172] (0xc000acb340) Reply frame received for 5\nI0428 00:57:43.077463 2438 log.go:172] (0xc000acb340) Data frame received for 5\nI0428 00:57:43.077498 2438 log.go:172] (0xc000a503c0) (5) Data frame handling\nI0428 00:57:43.077510 2438 log.go:172] (0xc000a503c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0428 00:57:43.077530 2438 log.go:172] (0xc000acb340) Data frame received for 3\nI0428 00:57:43.077535 2438 log.go:172] (0xc00097e5a0) (3) Data frame handling\nI0428 00:57:43.077540 2438 log.go:172] (0xc00097e5a0) (3) Data frame sent\nI0428 00:57:43.077646 2438 log.go:172] (0xc000acb340) Data frame received for 5\nI0428 00:57:43.077673 2438 log.go:172] (0xc000a503c0) (5) Data frame handling\nI0428 00:57:43.077852 2438 log.go:172] (0xc000acb340) Data frame received for 3\nI0428 00:57:43.077874 2438 log.go:172] (0xc00097e5a0) (3) Data frame handling\nI0428 00:57:43.079102 2438 log.go:172] (0xc000acb340) Data frame received for 1\nI0428 00:57:43.079126 2438 log.go:172] (0xc000acc8c0) (1) Data frame handling\nI0428 00:57:43.079140 2438 log.go:172] (0xc000acc8c0) (1) Data frame sent\nI0428 00:57:43.079155 2438 log.go:172] (0xc000acb340) (0xc000acc8c0) Stream removed, broadcasting: 1\nI0428 00:57:43.079180 2438 log.go:172] (0xc000acb340) Go away received\nI0428 00:57:43.079603 2438 log.go:172] 
(0xc000acb340) (0xc000acc8c0) Stream removed, broadcasting: 1\nI0428 00:57:43.079625 2438 log.go:172] (0xc000acb340) (0xc00097e5a0) Stream removed, broadcasting: 3\nI0428 00:57:43.079635 2438 log.go:172] (0xc000acb340) (0xc000a503c0) Stream removed, broadcasting: 5\n" Apr 28 00:57:43.084: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 28 00:57:43.084: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 28 00:57:43.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3463 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 28 00:57:43.313: INFO: stderr: "I0428 00:57:43.221374 2458 log.go:172] (0xc000abefd0) (0xc000afc6e0) Create stream\nI0428 00:57:43.221459 2458 log.go:172] (0xc000abefd0) (0xc000afc6e0) Stream added, broadcasting: 1\nI0428 00:57:43.224662 2458 log.go:172] (0xc000abefd0) Reply frame received for 1\nI0428 00:57:43.224728 2458 log.go:172] (0xc000abefd0) (0xc0009c6320) Create stream\nI0428 00:57:43.224758 2458 log.go:172] (0xc000abefd0) (0xc0009c6320) Stream added, broadcasting: 3\nI0428 00:57:43.226112 2458 log.go:172] (0xc000abefd0) Reply frame received for 3\nI0428 00:57:43.226179 2458 log.go:172] (0xc000abefd0) (0xc000afc780) Create stream\nI0428 00:57:43.226218 2458 log.go:172] (0xc000abefd0) (0xc000afc780) Stream added, broadcasting: 5\nI0428 00:57:43.227327 2458 log.go:172] (0xc000abefd0) Reply frame received for 5\nI0428 00:57:43.304932 2458 log.go:172] (0xc000abefd0) Data frame received for 5\nI0428 00:57:43.304963 2458 log.go:172] (0xc000afc780) (5) Data frame handling\nI0428 00:57:43.304976 2458 log.go:172] (0xc000afc780) (5) Data frame sent\nI0428 00:57:43.305007 2458 log.go:172] (0xc000abefd0) Data frame received for 5\nI0428 00:57:43.305017 2458 log.go:172] (0xc000afc780) (5) Data frame 
handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0428 00:57:43.305049 2458 log.go:172] (0xc000abefd0) Data frame received for 3\nI0428 00:57:43.305080 2458 log.go:172] (0xc0009c6320) (3) Data frame handling\nI0428 00:57:43.305287 2458 log.go:172] (0xc0009c6320) (3) Data frame sent\nI0428 00:57:43.305330 2458 log.go:172] (0xc000abefd0) Data frame received for 3\nI0428 00:57:43.305360 2458 log.go:172] (0xc0009c6320) (3) Data frame handling\nI0428 00:57:43.307169 2458 log.go:172] (0xc000abefd0) Data frame received for 1\nI0428 00:57:43.307202 2458 log.go:172] (0xc000afc6e0) (1) Data frame handling\nI0428 00:57:43.307219 2458 log.go:172] (0xc000afc6e0) (1) Data frame sent\nI0428 00:57:43.307237 2458 log.go:172] (0xc000abefd0) (0xc000afc6e0) Stream removed, broadcasting: 1\nI0428 00:57:43.307258 2458 log.go:172] (0xc000abefd0) Go away received\nI0428 00:57:43.307807 2458 log.go:172] (0xc000abefd0) (0xc000afc6e0) Stream removed, broadcasting: 1\nI0428 00:57:43.307835 2458 log.go:172] (0xc000abefd0) (0xc0009c6320) Stream removed, broadcasting: 3\nI0428 00:57:43.307849 2458 log.go:172] (0xc000abefd0) (0xc000afc780) Stream removed, broadcasting: 5\n" Apr 28 00:57:43.313: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 28 00:57:43.313: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 28 00:57:43.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3463 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 28 00:57:43.538: INFO: stderr: "I0428 00:57:43.463619 2478 log.go:172] (0xc00068a790) (0xc000805180) Create stream\nI0428 00:57:43.463710 2478 log.go:172] (0xc00068a790) (0xc000805180) Stream added, broadcasting: 1\nI0428 00:57:43.466802 2478 
log.go:172] (0xc00068a790) Reply frame received for 1\nI0428 00:57:43.466849 2478 log.go:172] (0xc00068a790) (0xc0009b0000) Create stream\nI0428 00:57:43.466866 2478 log.go:172] (0xc00068a790) (0xc0009b0000) Stream added, broadcasting: 3\nI0428 00:57:43.467794 2478 log.go:172] (0xc00068a790) Reply frame received for 3\nI0428 00:57:43.467816 2478 log.go:172] (0xc00068a790) (0xc000805360) Create stream\nI0428 00:57:43.467823 2478 log.go:172] (0xc00068a790) (0xc000805360) Stream added, broadcasting: 5\nI0428 00:57:43.468744 2478 log.go:172] (0xc00068a790) Reply frame received for 5\nI0428 00:57:43.530126 2478 log.go:172] (0xc00068a790) Data frame received for 3\nI0428 00:57:43.530177 2478 log.go:172] (0xc0009b0000) (3) Data frame handling\nI0428 00:57:43.530201 2478 log.go:172] (0xc0009b0000) (3) Data frame sent\nI0428 00:57:43.530217 2478 log.go:172] (0xc00068a790) Data frame received for 3\nI0428 00:57:43.530228 2478 log.go:172] (0xc0009b0000) (3) Data frame handling\nI0428 00:57:43.530280 2478 log.go:172] (0xc00068a790) Data frame received for 5\nI0428 00:57:43.530337 2478 log.go:172] (0xc000805360) (5) Data frame handling\nI0428 00:57:43.530372 2478 log.go:172] (0xc000805360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0428 00:57:43.530940 2478 log.go:172] (0xc00068a790) Data frame received for 5\nI0428 00:57:43.530977 2478 log.go:172] (0xc000805360) (5) Data frame handling\nI0428 00:57:43.532891 2478 log.go:172] (0xc00068a790) Data frame received for 1\nI0428 00:57:43.532928 2478 log.go:172] (0xc000805180) (1) Data frame handling\nI0428 00:57:43.532946 2478 log.go:172] (0xc000805180) (1) Data frame sent\nI0428 00:57:43.532976 2478 log.go:172] (0xc00068a790) (0xc000805180) Stream removed, broadcasting: 1\nI0428 00:57:43.533505 2478 log.go:172] (0xc00068a790) Go away received\nI0428 00:57:43.533548 2478 log.go:172] (0xc00068a790) (0xc000805180) Stream removed, 
broadcasting: 1\nI0428 00:57:43.533570 2478 log.go:172] (0xc00068a790) (0xc0009b0000) Stream removed, broadcasting: 3\nI0428 00:57:43.533593 2478 log.go:172] (0xc00068a790) (0xc000805360) Stream removed, broadcasting: 5\n" Apr 28 00:57:43.538: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 28 00:57:43.538: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 28 00:57:43.542: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:57:43.542: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 00:57:43.542: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 28 00:57:43.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3463 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 28 00:57:43.755: INFO: stderr: "I0428 00:57:43.675156 2499 log.go:172] (0xc000900160) (0xc00093c000) Create stream\nI0428 00:57:43.675211 2499 log.go:172] (0xc000900160) (0xc00093c000) Stream added, broadcasting: 1\nI0428 00:57:43.677976 2499 log.go:172] (0xc000900160) Reply frame received for 1\nI0428 00:57:43.678059 2499 log.go:172] (0xc000900160) (0xc0009bc000) Create stream\nI0428 00:57:43.678085 2499 log.go:172] (0xc000900160) (0xc0009bc000) Stream added, broadcasting: 3\nI0428 00:57:43.679254 2499 log.go:172] (0xc000900160) Reply frame received for 3\nI0428 00:57:43.679276 2499 log.go:172] (0xc000900160) (0xc00093c0a0) Create stream\nI0428 00:57:43.679290 2499 log.go:172] (0xc000900160) (0xc00093c0a0) Stream added, broadcasting: 5\nI0428 00:57:43.680276 2499 log.go:172] (0xc000900160) Reply frame received for 5\nI0428 00:57:43.749746 2499 log.go:172] 
(0xc000900160) Data frame received for 5\nI0428 00:57:43.749848 2499 log.go:172] (0xc00093c0a0) (5) Data frame handling\nI0428 00:57:43.749861 2499 log.go:172] (0xc00093c0a0) (5) Data frame sent\nI0428 00:57:43.749867 2499 log.go:172] (0xc000900160) Data frame received for 5\nI0428 00:57:43.749875 2499 log.go:172] (0xc00093c0a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0428 00:57:43.749898 2499 log.go:172] (0xc000900160) Data frame received for 3\nI0428 00:57:43.749906 2499 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0428 00:57:43.749913 2499 log.go:172] (0xc0009bc000) (3) Data frame sent\nI0428 00:57:43.749918 2499 log.go:172] (0xc000900160) Data frame received for 3\nI0428 00:57:43.749922 2499 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0428 00:57:43.751283 2499 log.go:172] (0xc000900160) Data frame received for 1\nI0428 00:57:43.751325 2499 log.go:172] (0xc00093c000) (1) Data frame handling\nI0428 00:57:43.751349 2499 log.go:172] (0xc00093c000) (1) Data frame sent\nI0428 00:57:43.751373 2499 log.go:172] (0xc000900160) (0xc00093c000) Stream removed, broadcasting: 1\nI0428 00:57:43.751413 2499 log.go:172] (0xc000900160) Go away received\nI0428 00:57:43.751705 2499 log.go:172] (0xc000900160) (0xc00093c000) Stream removed, broadcasting: 1\nI0428 00:57:43.751723 2499 log.go:172] (0xc000900160) (0xc0009bc000) Stream removed, broadcasting: 3\nI0428 00:57:43.751731 2499 log.go:172] (0xc000900160) (0xc00093c0a0) Stream removed, broadcasting: 5\n" Apr 28 00:57:43.755: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 28 00:57:43.755: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 28 00:57:43.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3463 ss-1 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 28 00:57:43.969: INFO: stderr: "I0428 00:57:43.877068 2519 log.go:172] (0xc0008e2420) (0xc0008b4320) Create stream\nI0428 00:57:43.877243 2519 log.go:172] (0xc0008e2420) (0xc0008b4320) Stream added, broadcasting: 1\nI0428 00:57:43.880475 2519 log.go:172] (0xc0008e2420) Reply frame received for 1\nI0428 00:57:43.880551 2519 log.go:172] (0xc0008e2420) (0xc00040b4a0) Create stream\nI0428 00:57:43.880574 2519 log.go:172] (0xc0008e2420) (0xc00040b4a0) Stream added, broadcasting: 3\nI0428 00:57:43.881725 2519 log.go:172] (0xc0008e2420) Reply frame received for 3\nI0428 00:57:43.881773 2519 log.go:172] (0xc0008e2420) (0xc0008c2000) Create stream\nI0428 00:57:43.881786 2519 log.go:172] (0xc0008e2420) (0xc0008c2000) Stream added, broadcasting: 5\nI0428 00:57:43.882691 2519 log.go:172] (0xc0008e2420) Reply frame received for 5\nI0428 00:57:43.940692 2519 log.go:172] (0xc0008e2420) Data frame received for 5\nI0428 00:57:43.940725 2519 log.go:172] (0xc0008c2000) (5) Data frame handling\nI0428 00:57:43.940766 2519 log.go:172] (0xc0008c2000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0428 00:57:43.960487 2519 log.go:172] (0xc0008e2420) Data frame received for 3\nI0428 00:57:43.960518 2519 log.go:172] (0xc00040b4a0) (3) Data frame handling\nI0428 00:57:43.960545 2519 log.go:172] (0xc00040b4a0) (3) Data frame sent\nI0428 00:57:43.960658 2519 log.go:172] (0xc0008e2420) Data frame received for 3\nI0428 00:57:43.960682 2519 log.go:172] (0xc00040b4a0) (3) Data frame handling\nI0428 00:57:43.960817 2519 log.go:172] (0xc0008e2420) Data frame received for 5\nI0428 00:57:43.960829 2519 log.go:172] (0xc0008c2000) (5) Data frame handling\nI0428 00:57:43.962785 2519 log.go:172] (0xc0008e2420) Data frame received for 1\nI0428 00:57:43.962843 2519 log.go:172] (0xc0008b4320) (1) Data frame handling\nI0428 00:57:43.962872 2519 log.go:172] (0xc0008b4320) (1) Data frame sent\nI0428 00:57:43.962900 2519 
log.go:172] (0xc0008e2420) (0xc0008b4320) Stream removed, broadcasting: 1\nI0428 00:57:43.962922 2519 log.go:172] (0xc0008e2420) Go away received\nI0428 00:57:43.963360 2519 log.go:172] (0xc0008e2420) (0xc0008b4320) Stream removed, broadcasting: 1\nI0428 00:57:43.963396 2519 log.go:172] (0xc0008e2420) (0xc00040b4a0) Stream removed, broadcasting: 3\nI0428 00:57:43.963422 2519 log.go:172] (0xc0008e2420) (0xc0008c2000) Stream removed, broadcasting: 5\n" Apr 28 00:57:43.969: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 28 00:57:43.969: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 28 00:57:43.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3463 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 28 00:57:44.180: INFO: stderr: "I0428 00:57:44.089890 2541 log.go:172] (0xc0000e9ef0) (0xc00093c960) Create stream\nI0428 00:57:44.089941 2541 log.go:172] (0xc0000e9ef0) (0xc00093c960) Stream added, broadcasting: 1\nI0428 00:57:44.094041 2541 log.go:172] (0xc0000e9ef0) Reply frame received for 1\nI0428 00:57:44.094081 2541 log.go:172] (0xc0000e9ef0) (0xc000711720) Create stream\nI0428 00:57:44.094088 2541 log.go:172] (0xc0000e9ef0) (0xc000711720) Stream added, broadcasting: 3\nI0428 00:57:44.094920 2541 log.go:172] (0xc0000e9ef0) Reply frame received for 3\nI0428 00:57:44.094948 2541 log.go:172] (0xc0000e9ef0) (0xc0005e0b40) Create stream\nI0428 00:57:44.094967 2541 log.go:172] (0xc0000e9ef0) (0xc0005e0b40) Stream added, broadcasting: 5\nI0428 00:57:44.095778 2541 log.go:172] (0xc0000e9ef0) Reply frame received for 5\nI0428 00:57:44.148255 2541 log.go:172] (0xc0000e9ef0) Data frame received for 5\nI0428 00:57:44.148277 2541 log.go:172] (0xc0005e0b40) (5) Data frame handling\nI0428 00:57:44.148284 2541 log.go:172] 
(0xc0005e0b40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0428 00:57:44.173930 2541 log.go:172] (0xc0000e9ef0) Data frame received for 5\nI0428 00:57:44.173976 2541 log.go:172] (0xc0005e0b40) (5) Data frame handling\nI0428 00:57:44.174002 2541 log.go:172] (0xc0000e9ef0) Data frame received for 3\nI0428 00:57:44.174019 2541 log.go:172] (0xc000711720) (3) Data frame handling\nI0428 00:57:44.174037 2541 log.go:172] (0xc000711720) (3) Data frame sent\nI0428 00:57:44.174047 2541 log.go:172] (0xc0000e9ef0) Data frame received for 3\nI0428 00:57:44.174059 2541 log.go:172] (0xc000711720) (3) Data frame handling\nI0428 00:57:44.175358 2541 log.go:172] (0xc0000e9ef0) Data frame received for 1\nI0428 00:57:44.175378 2541 log.go:172] (0xc00093c960) (1) Data frame handling\nI0428 00:57:44.175403 2541 log.go:172] (0xc00093c960) (1) Data frame sent\nI0428 00:57:44.175451 2541 log.go:172] (0xc0000e9ef0) (0xc00093c960) Stream removed, broadcasting: 1\nI0428 00:57:44.175477 2541 log.go:172] (0xc0000e9ef0) Go away received\nI0428 00:57:44.175803 2541 log.go:172] (0xc0000e9ef0) (0xc00093c960) Stream removed, broadcasting: 1\nI0428 00:57:44.175820 2541 log.go:172] (0xc0000e9ef0) (0xc000711720) Stream removed, broadcasting: 3\nI0428 00:57:44.175827 2541 log.go:172] (0xc0000e9ef0) (0xc0005e0b40) Stream removed, broadcasting: 5\n" Apr 28 00:57:44.180: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 28 00:57:44.180: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 28 00:57:44.180: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 00:57:44.197: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 28 00:57:54.205: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 00:57:54.205: INFO: Waiting for pod ss-1 to enter Running - Ready=false, 
currently Running - Ready=false Apr 28 00:57:54.205: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 28 00:57:54.239: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 00:57:54.240: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:09 +0000 UTC }] Apr 28 00:57:54.240: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC }] Apr 28 00:57:54.240: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC }] Apr 28 00:57:54.240: INFO: Apr 28 00:57:54.240: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 00:57:55.244: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 00:57:55.244: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:09 +0000 
UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:09 +0000 UTC }] Apr 28 00:57:55.244: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC }] Apr 28 00:57:55.244: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC }] Apr 28 00:57:55.245: INFO: Apr 28 00:57:55.245: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 00:57:56.279: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 00:57:56.279: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-04-28 00:57:09 +0000 UTC }] Apr 28 00:57:56.279: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC }] Apr 28 00:57:56.279: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC }] Apr 28 00:57:56.279: INFO: Apr 28 00:57:56.279: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 00:57:57.289: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 00:57:57.289: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:09 +0000 UTC }] Apr 28 00:57:57.289: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 00:57:32 +0000 UTC }] Apr 28 00:57:57.289: INFO: Apr 28 00:57:57.289: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 28 00:57:58.293: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.935987084s Apr 28 00:57:59.297: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.931687568s Apr 28 00:58:00.301: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.927453417s Apr 28 00:58:01.305: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.923389136s Apr 28 00:58:02.310: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.919337634s Apr 28 00:58:03.313: INFO: Verifying statefulset ss doesn't scale past 0 for another 915.089364ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3463 Apr 28 00:58:04.317: INFO: Scaling statefulset ss to 0 Apr 28 00:58:04.327: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 28 00:58:04.330: INFO: Deleting all statefulset in ns statefulset-3463 Apr 28 00:58:04.333: INFO: Scaling statefulset ss to 0 Apr 28 00:58:04.341: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 00:58:04.343: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:58:04.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3463" for this suite. 
• [SLOW TEST:55.004 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":177,"skipped":2962,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:58:04.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-422fc00e-4e07-4ca1-87c6-29981e0b9d85 STEP: Creating a pod to test consume configMaps Apr 28 00:58:04.452: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f8d9281-88c3-4840-ada8-e5bbfaeeb7dc" in namespace "configmap-8650" to be "Succeeded or Failed" Apr 28 00:58:04.495: INFO: Pod 
"pod-configmaps-6f8d9281-88c3-4840-ada8-e5bbfaeeb7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 42.656158ms Apr 28 00:58:06.499: INFO: Pod "pod-configmaps-6f8d9281-88c3-4840-ada8-e5bbfaeeb7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046648773s Apr 28 00:58:08.503: INFO: Pod "pod-configmaps-6f8d9281-88c3-4840-ada8-e5bbfaeeb7dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051092293s STEP: Saw pod success Apr 28 00:58:08.503: INFO: Pod "pod-configmaps-6f8d9281-88c3-4840-ada8-e5bbfaeeb7dc" satisfied condition "Succeeded or Failed" Apr 28 00:58:08.506: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-6f8d9281-88c3-4840-ada8-e5bbfaeeb7dc container configmap-volume-test: STEP: delete the pod Apr 28 00:58:08.534: INFO: Waiting for pod pod-configmaps-6f8d9281-88c3-4840-ada8-e5bbfaeeb7dc to disappear Apr 28 00:58:08.541: INFO: Pod pod-configmaps-6f8d9281-88c3-4840-ada8-e5bbfaeeb7dc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:58:08.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8650" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":2980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:58:08.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-d5ca9429-1a88-44f0-a362-1702af55cb4d STEP: Creating a pod to test consume configMaps Apr 28 00:58:08.775: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bdc60582-a3d3-457a-af05-f79042310698" in namespace "projected-3355" to be "Succeeded or Failed" Apr 28 00:58:08.800: INFO: Pod "pod-projected-configmaps-bdc60582-a3d3-457a-af05-f79042310698": Phase="Pending", Reason="", readiness=false. Elapsed: 25.380241ms Apr 28 00:58:10.804: INFO: Pod "pod-projected-configmaps-bdc60582-a3d3-457a-af05-f79042310698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029333792s Apr 28 00:58:12.807: INFO: Pod "pod-projected-configmaps-bdc60582-a3d3-457a-af05-f79042310698": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032555522s STEP: Saw pod success Apr 28 00:58:12.807: INFO: Pod "pod-projected-configmaps-bdc60582-a3d3-457a-af05-f79042310698" satisfied condition "Succeeded or Failed" Apr 28 00:58:12.810: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-bdc60582-a3d3-457a-af05-f79042310698 container projected-configmap-volume-test: STEP: delete the pod Apr 28 00:58:12.851: INFO: Waiting for pod pod-projected-configmaps-bdc60582-a3d3-457a-af05-f79042310698 to disappear Apr 28 00:58:12.862: INFO: Pod pod-projected-configmaps-bdc60582-a3d3-457a-af05-f79042310698 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:58:12.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3355" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3020,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:58:12.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:58:12.954: INFO: Waiting up to 5m0s for pod "busybox-user-65534-06a5677b-f16e-49f7-9306-e92b8428d802" in namespace "security-context-test-4084" to be "Succeeded or Failed" Apr 28 00:58:12.971: INFO: Pod "busybox-user-65534-06a5677b-f16e-49f7-9306-e92b8428d802": Phase="Pending", Reason="", readiness=false. Elapsed: 17.139643ms Apr 28 00:58:14.975: INFO: Pod "busybox-user-65534-06a5677b-f16e-49f7-9306-e92b8428d802": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020686992s Apr 28 00:58:16.980: INFO: Pod "busybox-user-65534-06a5677b-f16e-49f7-9306-e92b8428d802": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025404341s Apr 28 00:58:16.980: INFO: Pod "busybox-user-65534-06a5677b-f16e-49f7-9306-e92b8428d802" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:58:16.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4084" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3023,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:58:16.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:58:17.056: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:58:21.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2438" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:58:21.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:58:21.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5104" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":182,"skipped":3077,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:58:21.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:58:21.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 28 00:58:22.008: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-28T00:58:22Z generation:1 name:name1 resourceVersion:11594405 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0a1ec7ff-7a19-4726-b591-d5d0fde0e8aa] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 28 00:58:32.014: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-28T00:58:32Z generation:1 name:name2 resourceVersion:11594458 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:d1b38457-0d14-47ab-aac6-6bf6f1be1bc6] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 28 00:58:42.019: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-28T00:58:22Z generation:2 name:name1 resourceVersion:11594489 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0a1ec7ff-7a19-4726-b591-d5d0fde0e8aa] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 28 00:58:52.026: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-28T00:58:32Z generation:2 name:name2 resourceVersion:11594517 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d1b38457-0d14-47ab-aac6-6bf6f1be1bc6] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 28 00:59:02.034: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-28T00:58:22Z generation:2 name:name1 resourceVersion:11594548 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0a1ec7ff-7a19-4726-b591-d5d0fde0e8aa] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 28 00:59:12.042: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-28T00:58:32Z generation:2 name:name2 resourceVersion:11594582 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d1b38457-0d14-47ab-aac6-6bf6f1be1bc6] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:59:22.558: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3175" for this suite. • [SLOW TEST:61.315 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":183,"skipped":3113,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:59:22.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 00:59:22.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-29' Apr 28 00:59:23.006: INFO: 
stderr: "" Apr 28 00:59:23.006: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 28 00:59:23.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-29' Apr 28 00:59:23.259: INFO: stderr: "" Apr 28 00:59:23.259: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 28 00:59:24.265: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 00:59:24.265: INFO: Found 0 / 1 Apr 28 00:59:25.263: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 00:59:25.263: INFO: Found 0 / 1 Apr 28 00:59:26.264: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 00:59:26.264: INFO: Found 0 / 1 Apr 28 00:59:27.264: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 00:59:27.264: INFO: Found 1 / 1 Apr 28 00:59:27.264: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 28 00:59:27.267: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 00:59:27.267: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 28 00:59:27.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-njqkf --namespace=kubectl-29' Apr 28 00:59:27.383: INFO: stderr: "" Apr 28 00:59:27.383: INFO: stdout: "Name: agnhost-master-njqkf\nNamespace: kubectl-29\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Tue, 28 Apr 2020 00:59:23 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.153\nIPs:\n IP: 10.244.2.153\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://962052db0f98b3f97210c78f4288225fd6816f88270c1c71959e5ac8a7a07c36\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 28 Apr 2020 00:59:25 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-xn2fp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-xn2fp:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-xn2fp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-29/agnhost-master-njqkf to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 2s kubelet, latest-worker Started container 
agnhost-master\n" Apr 28 00:59:27.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-29' Apr 28 00:59:27.495: INFO: stderr: "" Apr 28 00:59:27.495: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-29\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-njqkf\n" Apr 28 00:59:27.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-29' Apr 28 00:59:27.602: INFO: stderr: "" Apr 28 00:59:27.602: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-29\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.51.247\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.153:6379\nSession Affinity: None\nEvents: \n" Apr 28 00:59:27.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 28 00:59:27.724: INFO: stderr: "" Apr 28 00:59:27.724: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n 
node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Tue, 28 Apr 2020 00:59:24 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 28 Apr 2020 00:55:36 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 28 Apr 2020 00:55:36 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 28 Apr 2020 00:55:36 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 28 Apr 2020 00:55:36 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system 
coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 43d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 43d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 43d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 43d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 28 00:59:27.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-29' Apr 28 00:59:27.832: INFO: stderr: "" Apr 28 00:59:27.832: INFO: stdout: "Name: kubectl-29\nLabels: e2e-framework=kubectl\n e2e-run=ef00971f-cbe2-484d-8239-0eb176cdbbbc\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:59:27.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-29" for this suite. 
• [SLOW TEST:5.247 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":184,"skipped":3119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:59:27.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-cxj5 STEP: Creating a pod to test atomic-volume-subpath Apr 28 00:59:27.953: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cxj5" in namespace "subpath-4482" to be "Succeeded or Failed" Apr 28 00:59:27.959: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.317574ms Apr 28 00:59:30.029: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075812565s Apr 28 00:59:32.033: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Running", Reason="", readiness=true. Elapsed: 4.079832646s Apr 28 00:59:34.037: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Running", Reason="", readiness=true. Elapsed: 6.084117319s Apr 28 00:59:36.047: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Running", Reason="", readiness=true. Elapsed: 8.093628665s Apr 28 00:59:38.051: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Running", Reason="", readiness=true. Elapsed: 10.097790652s Apr 28 00:59:40.055: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Running", Reason="", readiness=true. Elapsed: 12.102113084s Apr 28 00:59:42.059: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Running", Reason="", readiness=true. Elapsed: 14.106200203s Apr 28 00:59:44.063: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Running", Reason="", readiness=true. Elapsed: 16.110153814s Apr 28 00:59:46.068: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Running", Reason="", readiness=true. Elapsed: 18.114407304s Apr 28 00:59:48.083: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Running", Reason="", readiness=true. Elapsed: 20.129791764s Apr 28 00:59:50.087: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Running", Reason="", readiness=true. Elapsed: 22.133751839s Apr 28 00:59:52.090: INFO: Pod "pod-subpath-test-projected-cxj5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.137140575s STEP: Saw pod success Apr 28 00:59:52.090: INFO: Pod "pod-subpath-test-projected-cxj5" satisfied condition "Succeeded or Failed" Apr 28 00:59:52.093: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-cxj5 container test-container-subpath-projected-cxj5: STEP: delete the pod Apr 28 00:59:52.159: INFO: Waiting for pod pod-subpath-test-projected-cxj5 to disappear Apr 28 00:59:52.171: INFO: Pod pod-subpath-test-projected-cxj5 no longer exists STEP: Deleting pod pod-subpath-test-projected-cxj5 Apr 28 00:59:52.171: INFO: Deleting pod "pod-subpath-test-projected-cxj5" in namespace "subpath-4482" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:59:52.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4482" for this suite. • [SLOW TEST:24.342 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":185,"skipped":3143,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:59:52.184: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 00:59:56.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4530" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":186,"skipped":3148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 00:59:57.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-8e5ec832-7493-40d0-a7ab-65de3b3ab5b7 STEP: Creating a pod to test consume configMaps 
Apr 28 00:59:57.184: INFO: Waiting up to 5m0s for pod "pod-configmaps-d62dc32a-0c86-43c8-b78a-8e1398a79326" in namespace "configmap-7410" to be "Succeeded or Failed" Apr 28 00:59:57.187: INFO: Pod "pod-configmaps-d62dc32a-0c86-43c8-b78a-8e1398a79326": Phase="Pending", Reason="", readiness=false. Elapsed: 2.697234ms Apr 28 00:59:59.191: INFO: Pod "pod-configmaps-d62dc32a-0c86-43c8-b78a-8e1398a79326": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006992659s Apr 28 01:00:01.196: INFO: Pod "pod-configmaps-d62dc32a-0c86-43c8-b78a-8e1398a79326": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011221692s STEP: Saw pod success Apr 28 01:00:01.196: INFO: Pod "pod-configmaps-d62dc32a-0c86-43c8-b78a-8e1398a79326" satisfied condition "Succeeded or Failed" Apr 28 01:00:01.199: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d62dc32a-0c86-43c8-b78a-8e1398a79326 container configmap-volume-test: STEP: delete the pod Apr 28 01:00:01.264: INFO: Waiting for pod pod-configmaps-d62dc32a-0c86-43c8-b78a-8e1398a79326 to disappear Apr 28 01:00:01.270: INFO: Pod pod-configmaps-d62dc32a-0c86-43c8-b78a-8e1398a79326 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:00:01.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7410" for this suite. 
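The "Waiting up to 5m0s for pod ... to be "Succeeded or Failed"" lines above are the framework polling the pod's `.status.phase` every ~2s. A minimal shell sketch of the same wait loop, assuming `kubectl` is on PATH and pointed at the cluster (the pod/namespace names in the comment are taken from this run and are purely illustrative):

```shell
#!/bin/sh
# Poll a pod's .status.phase until it reaches Succeeded or Failed,
# mirroring the framework's "Waiting up to 5m0s for pod ..." loop.
wait_for_pod_done() {
  pod="$1"; ns="$2"; timeout="${3:-300}"
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    phase=$(kubectl get pod "$pod" -n "$ns" -o jsonpath='{.status.phase}')
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;
    esac
    sleep 2
    elapsed=$((elapsed + 2))
  done
  echo "timeout"; return 1
}

# Example, matching this run's log (names illustrative):
# wait_for_pod_done pod-configmaps-d62dc32a-0c86-43c8-b78a-8e1398a79326 configmap-7410 300
```

The framework additionally treats `Failed` as success for these tests because the condition is literally "Succeeded or Failed"; a real harness would then fetch container logs to decide, as the "Trying to get logs from node ..." lines show.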
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3207,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:00:01.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-aeaf5af8-c08c-4d4d-b4b0-719fb6170189 STEP: Creating a pod to test consume configMaps Apr 28 01:00:01.376: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b23bec5-7acb-474c-b0c6-f0cca6738fc0" in namespace "configmap-1694" to be "Succeeded or Failed" Apr 28 01:00:01.379: INFO: Pod "pod-configmaps-1b23bec5-7acb-474c-b0c6-f0cca6738fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457165ms Apr 28 01:00:03.413: INFO: Pod "pod-configmaps-1b23bec5-7acb-474c-b0c6-f0cca6738fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036209029s Apr 28 01:00:05.416: INFO: Pod "pod-configmaps-1b23bec5-7acb-474c-b0c6-f0cca6738fc0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039769248s STEP: Saw pod success Apr 28 01:00:05.416: INFO: Pod "pod-configmaps-1b23bec5-7acb-474c-b0c6-f0cca6738fc0" satisfied condition "Succeeded or Failed" Apr 28 01:00:05.419: INFO: Trying to get logs from node latest-worker pod pod-configmaps-1b23bec5-7acb-474c-b0c6-f0cca6738fc0 container configmap-volume-test: STEP: delete the pod Apr 28 01:00:05.434: INFO: Waiting for pod pod-configmaps-1b23bec5-7acb-474c-b0c6-f0cca6738fc0 to disappear Apr 28 01:00:05.438: INFO: Pod pod-configmaps-1b23bec5-7acb-474c-b0c6-f0cca6738fc0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:00:05.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1694" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3213,"failed":0} SSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:00:05.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-9294 STEP: 
creating replication controller nodeport-test in namespace services-9294 I0428 01:00:05.563810 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9294, replica count: 2 I0428 01:00:08.614282 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 01:00:11.614525 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 28 01:00:11.614: INFO: Creating new exec pod Apr 28 01:00:16.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9294 execpodgw72p -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 28 01:00:16.902: INFO: stderr: "I0428 01:00:16.792231 2706 log.go:172] (0xc000a1a6e0) (0xc00086e0a0) Create stream\nI0428 01:00:16.792315 2706 log.go:172] (0xc000a1a6e0) (0xc00086e0a0) Stream added, broadcasting: 1\nI0428 01:00:16.795235 2706 log.go:172] (0xc000a1a6e0) Reply frame received for 1\nI0428 01:00:16.795291 2706 log.go:172] (0xc000a1a6e0) (0xc000956000) Create stream\nI0428 01:00:16.795307 2706 log.go:172] (0xc000a1a6e0) (0xc000956000) Stream added, broadcasting: 3\nI0428 01:00:16.796297 2706 log.go:172] (0xc000a1a6e0) Reply frame received for 3\nI0428 01:00:16.796358 2706 log.go:172] (0xc000a1a6e0) (0xc00086e140) Create stream\nI0428 01:00:16.796375 2706 log.go:172] (0xc000a1a6e0) (0xc00086e140) Stream added, broadcasting: 5\nI0428 01:00:16.797354 2706 log.go:172] (0xc000a1a6e0) Reply frame received for 5\nI0428 01:00:16.893443 2706 log.go:172] (0xc000a1a6e0) Data frame received for 5\nI0428 01:00:16.893489 2706 log.go:172] (0xc00086e140) (5) Data frame handling\nI0428 01:00:16.893533 2706 log.go:172] (0xc00086e140) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0428 01:00:16.893633 2706 log.go:172] (0xc000a1a6e0) Data frame 
received for 5\nI0428 01:00:16.893651 2706 log.go:172] (0xc00086e140) (5) Data frame handling\nI0428 01:00:16.893668 2706 log.go:172] (0xc00086e140) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0428 01:00:16.894356 2706 log.go:172] (0xc000a1a6e0) Data frame received for 3\nI0428 01:00:16.894391 2706 log.go:172] (0xc000956000) (3) Data frame handling\nI0428 01:00:16.894508 2706 log.go:172] (0xc000a1a6e0) Data frame received for 5\nI0428 01:00:16.894539 2706 log.go:172] (0xc00086e140) (5) Data frame handling\nI0428 01:00:16.896387 2706 log.go:172] (0xc000a1a6e0) Data frame received for 1\nI0428 01:00:16.896419 2706 log.go:172] (0xc00086e0a0) (1) Data frame handling\nI0428 01:00:16.896434 2706 log.go:172] (0xc00086e0a0) (1) Data frame sent\nI0428 01:00:16.896474 2706 log.go:172] (0xc000a1a6e0) (0xc00086e0a0) Stream removed, broadcasting: 1\nI0428 01:00:16.896499 2706 log.go:172] (0xc000a1a6e0) Go away received\nI0428 01:00:16.896939 2706 log.go:172] (0xc000a1a6e0) (0xc00086e0a0) Stream removed, broadcasting: 1\nI0428 01:00:16.896960 2706 log.go:172] (0xc000a1a6e0) (0xc000956000) Stream removed, broadcasting: 3\nI0428 01:00:16.896972 2706 log.go:172] (0xc000a1a6e0) (0xc00086e140) Stream removed, broadcasting: 5\n" Apr 28 01:00:16.902: INFO: stdout: "" Apr 28 01:00:16.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9294 execpodgw72p -- /bin/sh -x -c nc -zv -t -w 2 10.96.250.213 80' Apr 28 01:00:17.105: INFO: stderr: "I0428 01:00:17.012823 2728 log.go:172] (0xc000a0afd0) (0xc000a028c0) Create stream\nI0428 01:00:17.012886 2728 log.go:172] (0xc000a0afd0) (0xc000a028c0) Stream added, broadcasting: 1\nI0428 01:00:17.017952 2728 log.go:172] (0xc000a0afd0) Reply frame received for 1\nI0428 01:00:17.017996 2728 log.go:172] (0xc000a0afd0) (0xc0005cb900) Create stream\nI0428 01:00:17.018007 2728 log.go:172] (0xc000a0afd0) (0xc0005cb900) Stream added, 
broadcasting: 3\nI0428 01:00:17.018989 2728 log.go:172] (0xc000a0afd0) Reply frame received for 3\nI0428 01:00:17.019053 2728 log.go:172] (0xc000a0afd0) (0xc000290be0) Create stream\nI0428 01:00:17.019072 2728 log.go:172] (0xc000a0afd0) (0xc000290be0) Stream added, broadcasting: 5\nI0428 01:00:17.020067 2728 log.go:172] (0xc000a0afd0) Reply frame received for 5\nI0428 01:00:17.096821 2728 log.go:172] (0xc000a0afd0) Data frame received for 3\nI0428 01:00:17.096863 2728 log.go:172] (0xc0005cb900) (3) Data frame handling\nI0428 01:00:17.096920 2728 log.go:172] (0xc000a0afd0) Data frame received for 5\nI0428 01:00:17.096936 2728 log.go:172] (0xc000290be0) (5) Data frame handling\nI0428 01:00:17.096962 2728 log.go:172] (0xc000290be0) (5) Data frame sent\nI0428 01:00:17.096979 2728 log.go:172] (0xc000a0afd0) Data frame received for 5\nI0428 01:00:17.096998 2728 log.go:172] (0xc000290be0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.250.213 80\nConnection to 10.96.250.213 80 port [tcp/http] succeeded!\nI0428 01:00:17.100642 2728 log.go:172] (0xc000a0afd0) Data frame received for 1\nI0428 01:00:17.100657 2728 log.go:172] (0xc000a028c0) (1) Data frame handling\nI0428 01:00:17.100674 2728 log.go:172] (0xc000a028c0) (1) Data frame sent\nI0428 01:00:17.100689 2728 log.go:172] (0xc000a0afd0) (0xc000a028c0) Stream removed, broadcasting: 1\nI0428 01:00:17.100705 2728 log.go:172] (0xc000a0afd0) Go away received\nI0428 01:00:17.101056 2728 log.go:172] (0xc000a0afd0) (0xc000a028c0) Stream removed, broadcasting: 1\nI0428 01:00:17.101074 2728 log.go:172] (0xc000a0afd0) (0xc0005cb900) Stream removed, broadcasting: 3\nI0428 01:00:17.101086 2728 log.go:172] (0xc000a0afd0) (0xc000290be0) Stream removed, broadcasting: 5\n" Apr 28 01:00:17.105: INFO: stdout: "" Apr 28 01:00:17.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9294 execpodgw72p -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32138' Apr 28 
01:00:17.309: INFO: stderr: "I0428 01:00:17.237076 2748 log.go:172] (0xc0009b0370) (0xc000b8a0a0) Create stream\nI0428 01:00:17.237228 2748 log.go:172] (0xc0009b0370) (0xc000b8a0a0) Stream added, broadcasting: 1\nI0428 01:00:17.240226 2748 log.go:172] (0xc0009b0370) Reply frame received for 1\nI0428 01:00:17.240275 2748 log.go:172] (0xc0009b0370) (0xc000b8a140) Create stream\nI0428 01:00:17.240291 2748 log.go:172] (0xc0009b0370) (0xc000b8a140) Stream added, broadcasting: 3\nI0428 01:00:17.241603 2748 log.go:172] (0xc0009b0370) Reply frame received for 3\nI0428 01:00:17.241659 2748 log.go:172] (0xc0009b0370) (0xc000a42000) Create stream\nI0428 01:00:17.241675 2748 log.go:172] (0xc0009b0370) (0xc000a42000) Stream added, broadcasting: 5\nI0428 01:00:17.242857 2748 log.go:172] (0xc0009b0370) Reply frame received for 5\nI0428 01:00:17.302318 2748 log.go:172] (0xc0009b0370) Data frame received for 5\nI0428 01:00:17.302356 2748 log.go:172] (0xc000a42000) (5) Data frame handling\nI0428 01:00:17.302399 2748 log.go:172] (0xc000a42000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32138\nConnection to 172.17.0.13 32138 port [tcp/32138] succeeded!\nI0428 01:00:17.302497 2748 log.go:172] (0xc0009b0370) Data frame received for 3\nI0428 01:00:17.302529 2748 log.go:172] (0xc000b8a140) (3) Data frame handling\nI0428 01:00:17.302860 2748 log.go:172] (0xc0009b0370) Data frame received for 5\nI0428 01:00:17.302884 2748 log.go:172] (0xc000a42000) (5) Data frame handling\nI0428 01:00:17.304331 2748 log.go:172] (0xc0009b0370) Data frame received for 1\nI0428 01:00:17.304361 2748 log.go:172] (0xc000b8a0a0) (1) Data frame handling\nI0428 01:00:17.304380 2748 log.go:172] (0xc000b8a0a0) (1) Data frame sent\nI0428 01:00:17.304671 2748 log.go:172] (0xc0009b0370) (0xc000b8a0a0) Stream removed, broadcasting: 1\nI0428 01:00:17.304730 2748 log.go:172] (0xc0009b0370) Go away received\nI0428 01:00:17.305088 2748 log.go:172] (0xc0009b0370) (0xc000b8a0a0) Stream removed, broadcasting: 1\nI0428 
01:00:17.305106 2748 log.go:172] (0xc0009b0370) (0xc000b8a140) Stream removed, broadcasting: 3\nI0428 01:00:17.305257 2748 log.go:172] (0xc0009b0370) (0xc000a42000) Stream removed, broadcasting: 5\n" Apr 28 01:00:17.309: INFO: stdout: "" Apr 28 01:00:17.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9294 execpodgw72p -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32138' Apr 28 01:00:17.515: INFO: stderr: "I0428 01:00:17.440514 2768 log.go:172] (0xc000928000) (0xc0009d4000) Create stream\nI0428 01:00:17.440592 2768 log.go:172] (0xc000928000) (0xc0009d4000) Stream added, broadcasting: 1\nI0428 01:00:17.443958 2768 log.go:172] (0xc000928000) Reply frame received for 1\nI0428 01:00:17.443999 2768 log.go:172] (0xc000928000) (0xc00090c000) Create stream\nI0428 01:00:17.444014 2768 log.go:172] (0xc000928000) (0xc00090c000) Stream added, broadcasting: 3\nI0428 01:00:17.444865 2768 log.go:172] (0xc000928000) Reply frame received for 3\nI0428 01:00:17.444901 2768 log.go:172] (0xc000928000) (0xc0004ca000) Create stream\nI0428 01:00:17.444911 2768 log.go:172] (0xc000928000) (0xc0004ca000) Stream added, broadcasting: 5\nI0428 01:00:17.445863 2768 log.go:172] (0xc000928000) Reply frame received for 5\nI0428 01:00:17.508637 2768 log.go:172] (0xc000928000) Data frame received for 5\nI0428 01:00:17.508675 2768 log.go:172] (0xc0004ca000) (5) Data frame handling\nI0428 01:00:17.508720 2768 log.go:172] (0xc0004ca000) (5) Data frame sent\nI0428 01:00:17.508736 2768 log.go:172] (0xc000928000) Data frame received for 5\nI0428 01:00:17.508747 2768 log.go:172] (0xc0004ca000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32138\nConnection to 172.17.0.12 32138 port [tcp/32138] succeeded!\nI0428 01:00:17.508784 2768 log.go:172] (0xc0004ca000) (5) Data frame sent\nI0428 01:00:17.509049 2768 log.go:172] (0xc000928000) Data frame received for 5\nI0428 01:00:17.509066 2768 log.go:172] (0xc0004ca000) (5) 
Data frame handling\nI0428 01:00:17.509103 2768 log.go:172] (0xc000928000) Data frame received for 3\nI0428 01:00:17.509236 2768 log.go:172] (0xc00090c000) (3) Data frame handling\nI0428 01:00:17.510939 2768 log.go:172] (0xc000928000) Data frame received for 1\nI0428 01:00:17.510962 2768 log.go:172] (0xc0009d4000) (1) Data frame handling\nI0428 01:00:17.510973 2768 log.go:172] (0xc0009d4000) (1) Data frame sent\nI0428 01:00:17.510984 2768 log.go:172] (0xc000928000) (0xc0009d4000) Stream removed, broadcasting: 1\nI0428 01:00:17.511070 2768 log.go:172] (0xc000928000) Go away received\nI0428 01:00:17.511274 2768 log.go:172] (0xc000928000) (0xc0009d4000) Stream removed, broadcasting: 1\nI0428 01:00:17.511289 2768 log.go:172] (0xc000928000) (0xc00090c000) Stream removed, broadcasting: 3\nI0428 01:00:17.511296 2768 log.go:172] (0xc000928000) (0xc0004ca000) Stream removed, broadcasting: 5\n" Apr 28 01:00:17.515: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:00:17.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9294" for this suite. 
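The NodePort test above verifies reachability three ways: by service DNS name, by ClusterIP, and by each node IP plus the allocated NodePort, each time running a zero-I/O TCP connect (`nc -z`) with a 2-second timeout from an exec pod. A sketch of that check; the namespace, pod name, IPs, and port come from this specific run and are only examples:

```shell
# Probe an endpoint the way the e2e test does: nc -z (connect only),
# -v (verbose), -t (TCP), -w 2 (2s timeout), run inside an exec pod.
check_endpoint() {
  ns="$1"; execpod="$2"; host="$3"; port="$4"
  kubectl exec -n "$ns" "$execpod" -- \
    /bin/sh -c "nc -zv -t -w 2 $host $port"
}

# The three paths exercised in this run (values illustrative):
# check_endpoint services-9294 execpodgw72p nodeport-test 80    # service name
# check_endpoint services-9294 execpodgw72p 10.96.250.213 80    # ClusterIP
# check_endpoint services-9294 execpodgw72p 172.17.0.13 32138   # node IP + NodePort
```

Checking every node IP matters because kube-proxy must open the NodePort on all nodes, not just those running a backend pod.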
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.077 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":189,"skipped":3216,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:00:17.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-5560 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 28 01:00:17.605: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 28 01:00:17.664: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 28 01:00:19.701: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 28 01:00:21.668: INFO: The status of 
Pod netserver-0 is Running (Ready = false) Apr 28 01:00:23.669: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 01:00:25.668: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 01:00:27.669: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 01:00:29.669: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 01:00:31.668: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 28 01:00:31.675: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 28 01:00:33.679: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 28 01:00:35.679: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 28 01:00:39.738: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.157 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5560 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 01:00:39.738: INFO: >>> kubeConfig: /root/.kube/config I0428 01:00:39.766120 7 log.go:172] (0xc00301e420) (0xc001717cc0) Create stream I0428 01:00:39.766147 7 log.go:172] (0xc00301e420) (0xc001717cc0) Stream added, broadcasting: 1 I0428 01:00:39.768441 7 log.go:172] (0xc00301e420) Reply frame received for 1 I0428 01:00:39.768485 7 log.go:172] (0xc00301e420) (0xc001735400) Create stream I0428 01:00:39.768502 7 log.go:172] (0xc00301e420) (0xc001735400) Stream added, broadcasting: 3 I0428 01:00:39.769671 7 log.go:172] (0xc00301e420) Reply frame received for 3 I0428 01:00:39.769722 7 log.go:172] (0xc00301e420) (0xc002a1f720) Create stream I0428 01:00:39.769737 7 log.go:172] (0xc00301e420) (0xc002a1f720) Stream added, broadcasting: 5 I0428 01:00:39.770900 7 log.go:172] (0xc00301e420) Reply frame received for 5 I0428 01:00:40.850078 7 log.go:172] (0xc00301e420) Data frame received for 5 I0428 01:00:40.850134 7 log.go:172] (0xc002a1f720) (5) Data frame handling 
I0428 01:00:40.850167 7 log.go:172] (0xc00301e420) Data frame received for 3 I0428 01:00:40.850190 7 log.go:172] (0xc001735400) (3) Data frame handling I0428 01:00:40.850228 7 log.go:172] (0xc001735400) (3) Data frame sent I0428 01:00:40.850251 7 log.go:172] (0xc00301e420) Data frame received for 3 I0428 01:00:40.850269 7 log.go:172] (0xc001735400) (3) Data frame handling I0428 01:00:40.852523 7 log.go:172] (0xc00301e420) Data frame received for 1 I0428 01:00:40.852549 7 log.go:172] (0xc001717cc0) (1) Data frame handling I0428 01:00:40.852583 7 log.go:172] (0xc001717cc0) (1) Data frame sent I0428 01:00:40.852663 7 log.go:172] (0xc00301e420) (0xc001717cc0) Stream removed, broadcasting: 1 I0428 01:00:40.852699 7 log.go:172] (0xc00301e420) Go away received I0428 01:00:40.852881 7 log.go:172] (0xc00301e420) (0xc001717cc0) Stream removed, broadcasting: 1 I0428 01:00:40.852921 7 log.go:172] (0xc00301e420) (0xc001735400) Stream removed, broadcasting: 3 I0428 01:00:40.852942 7 log.go:172] (0xc00301e420) (0xc002a1f720) Stream removed, broadcasting: 5 Apr 28 01:00:40.852: INFO: Found all expected endpoints: [netserver-0] Apr 28 01:00:40.856: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.158 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5560 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 01:00:40.856: INFO: >>> kubeConfig: /root/.kube/config I0428 01:00:40.891507 7 log.go:172] (0xc002f72dc0) (0xc001735a40) Create stream I0428 01:00:40.891530 7 log.go:172] (0xc002f72dc0) (0xc001735a40) Stream added, broadcasting: 1 I0428 01:00:40.893905 7 log.go:172] (0xc002f72dc0) Reply frame received for 1 I0428 01:00:40.893955 7 log.go:172] (0xc002f72dc0) (0xc001717d60) Create stream I0428 01:00:40.893978 7 log.go:172] (0xc002f72dc0) (0xc001717d60) Stream added, broadcasting: 3 I0428 01:00:40.895146 7 log.go:172] (0xc002f72dc0) Reply frame received for 3 I0428 
01:00:40.895189 7 log.go:172] (0xc002f72dc0) (0xc002a1f7c0) Create stream I0428 01:00:40.895205 7 log.go:172] (0xc002f72dc0) (0xc002a1f7c0) Stream added, broadcasting: 5 I0428 01:00:40.896340 7 log.go:172] (0xc002f72dc0) Reply frame received for 5 I0428 01:00:41.974586 7 log.go:172] (0xc002f72dc0) Data frame received for 3 I0428 01:00:41.974634 7 log.go:172] (0xc001717d60) (3) Data frame handling I0428 01:00:41.974659 7 log.go:172] (0xc001717d60) (3) Data frame sent I0428 01:00:41.974787 7 log.go:172] (0xc002f72dc0) Data frame received for 3 I0428 01:00:41.974836 7 log.go:172] (0xc001717d60) (3) Data frame handling I0428 01:00:41.974851 7 log.go:172] (0xc002f72dc0) Data frame received for 5 I0428 01:00:41.974867 7 log.go:172] (0xc002a1f7c0) (5) Data frame handling I0428 01:00:41.976743 7 log.go:172] (0xc002f72dc0) Data frame received for 1 I0428 01:00:41.976782 7 log.go:172] (0xc001735a40) (1) Data frame handling I0428 01:00:41.976813 7 log.go:172] (0xc001735a40) (1) Data frame sent I0428 01:00:41.976846 7 log.go:172] (0xc002f72dc0) (0xc001735a40) Stream removed, broadcasting: 1 I0428 01:00:41.976918 7 log.go:172] (0xc002f72dc0) Go away received I0428 01:00:41.976954 7 log.go:172] (0xc002f72dc0) (0xc001735a40) Stream removed, broadcasting: 1 I0428 01:00:41.977009 7 log.go:172] (0xc002f72dc0) (0xc001717d60) Stream removed, broadcasting: 3 I0428 01:00:41.977042 7 log.go:172] (0xc002f72dc0) (0xc002a1f7c0) Stream removed, broadcasting: 5 Apr 28 01:00:41.977: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:00:41.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5560" for this suite. 
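The node-pod UDP check above sends the literal string `hostName` to each netserver pod's UDP port and expects a non-blank reply (the netserver echoes its hostname, which is how "Found all expected endpoints: [netserver-0]" is determined). The probe, extracted from the `ExecWithOptions` command in the log (pod IP and port are from this run, purely illustrative):

```shell
# UDP reachability probe as run by the pod-network test: send
# "hostName", wait up to 1s for a datagram back, drop blank lines.
probe_udp() {
  ip="$1"; port="${2:-8081}"
  echo hostName | nc -w 1 -u "$ip" "$port" | grep -v '^\s*$'
}

# probe_udp 10.244.2.157 8081   # should print the netserver's hostname
```

The `grep -v '^\s*$'` filter exists because a UDP "reply" of only whitespace would otherwise count as success.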
• [SLOW TEST:24.463 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3223,"failed":0} SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:00:41.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Apr 28 01:00:46.587: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9516 pod-service-account-4e520396-8e6c-4ec3-8144-cd252c2b0430 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 28 01:00:46.809: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9516 pod-service-account-4e520396-8e6c-4ec3-8144-cd252c2b0430 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: 
reading a file in the container Apr 28 01:00:46.996: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9516 pod-service-account-4e520396-8e6c-4ec3-8144-cd252c2b0430 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:00:47.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9516" for this suite. • [SLOW TEST:5.243 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":191,"skipped":3233,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:00:47.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 28 01:00:47.308: INFO: Waiting up to 5m0s for pod 
"pod-46927349-5c3e-48b9-8613-d9c9af551d53" in namespace "emptydir-2783" to be "Succeeded or Failed" Apr 28 01:00:47.359: INFO: Pod "pod-46927349-5c3e-48b9-8613-d9c9af551d53": Phase="Pending", Reason="", readiness=false. Elapsed: 51.374663ms Apr 28 01:00:49.368: INFO: Pod "pod-46927349-5c3e-48b9-8613-d9c9af551d53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060035893s Apr 28 01:00:51.373: INFO: Pod "pod-46927349-5c3e-48b9-8613-d9c9af551d53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064613378s STEP: Saw pod success Apr 28 01:00:51.373: INFO: Pod "pod-46927349-5c3e-48b9-8613-d9c9af551d53" satisfied condition "Succeeded or Failed" Apr 28 01:00:51.376: INFO: Trying to get logs from node latest-worker pod pod-46927349-5c3e-48b9-8613-d9c9af551d53 container test-container: STEP: delete the pod Apr 28 01:00:51.411: INFO: Waiting for pod pod-46927349-5c3e-48b9-8613-d9c9af551d53 to disappear Apr 28 01:00:51.416: INFO: Pod pod-46927349-5c3e-48b9-8613-d9c9af551d53 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:00:51.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2783" for this suite. 
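The emptyDir tests in this stretch (non-root, 0777/0666, tmpfs medium) each start a pod whose container stats the mount and prints the observed mode and filesystem type. The log only shows the pass/fail result; a hand-check along the same lines could look like this (pod, namespace, and mount path are hypothetical, and `stat -c` assumes GNU/busybox stat in the container image):

```shell
# Print an in-pod mount's permission bits and its /proc/mounts entry,
# roughly what the emptydir mode tests verify (e.g. "777" and "tmpfs").
check_mount() {
  ns="$1"; pod="$2"; path="$3"
  kubectl exec -n "$ns" "$pod" -- \
    sh -c "stat -c '%a' $path && grep -w $path /proc/mounts"
}

# check_mount emptydir-2783 pod-46927349 /mnt/volume   # names illustrative
```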
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3292,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:00:51.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-a4928c4b-e47d-4cec-906f-dfe66c34e1b1 STEP: Creating configMap with name cm-test-opt-upd-530f97f3-3020-4b53-a868-3061258587a5 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a4928c4b-e47d-4cec-906f-dfe66c34e1b1 STEP: Updating configmap cm-test-opt-upd-530f97f3-3020-4b53-a868-3061258587a5 STEP: Creating configMap with name cm-test-opt-create-19ecdb9a-410b-4eb4-9ec9-4211c5fc2c63 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:00:59.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4560" for this suite. 
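The "optional updates" ConfigMap test above performs three mutations and then waits for the kubelet to re-sync the projected volume: it deletes one ConfigMap, updates a second, and creates a third. The same rotation, sketched by hand with shortened illustrative names (the real ones in the log carry UUID suffixes; kubelet propagation is eventual, bounded by its sync period plus cache TTL):

```shell
# Mutate three optional ConfigMaps backing a projected volume:
# the deleted one's file should vanish, the updated one's content
# should change, and the created one's file should appear.
rotate_optional_configmaps() {
  ns="$1"
  kubectl -n "$ns" delete configmap cm-test-opt-del
  # Update-in-place via client-side dry-run piped into apply.
  kubectl -n "$ns" create configmap cm-test-opt-upd \
    --from-literal=data-3=value-3 --dry-run=client -o yaml \
    | kubectl -n "$ns" apply -f -
  kubectl -n "$ns" create configmap cm-test-opt-create \
    --from-literal=data-1=value-1
}

# rotate_optional_configmaps configmap-4560   # namespace from this run
# ...then poll the pod:  kubectl -n configmap-4560 exec POD -- cat /etc/cm/data-3
```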
• [SLOW TEST:8.248 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3312,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:00:59.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 28 01:00:59.771: INFO: Waiting up to 5m0s for pod "pod-222f5f80-5c0d-45c8-aabd-464264c8115f" in namespace "emptydir-5105" to be "Succeeded or Failed" Apr 28 01:00:59.778: INFO: Pod "pod-222f5f80-5c0d-45c8-aabd-464264c8115f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.01043ms Apr 28 01:01:01.863: INFO: Pod "pod-222f5f80-5c0d-45c8-aabd-464264c8115f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091329634s Apr 28 01:01:03.867: INFO: Pod "pod-222f5f80-5c0d-45c8-aabd-464264c8115f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.095609302s STEP: Saw pod success Apr 28 01:01:03.867: INFO: Pod "pod-222f5f80-5c0d-45c8-aabd-464264c8115f" satisfied condition "Succeeded or Failed" Apr 28 01:01:03.870: INFO: Trying to get logs from node latest-worker2 pod pod-222f5f80-5c0d-45c8-aabd-464264c8115f container test-container: STEP: delete the pod Apr 28 01:01:03.888: INFO: Waiting for pod pod-222f5f80-5c0d-45c8-aabd-464264c8115f to disappear Apr 28 01:01:03.892: INFO: Pod pod-222f5f80-5c0d-45c8-aabd-464264c8115f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:01:03.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5105" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":194,"skipped":3320,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:01:03.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 28 01:01:03.991: INFO: Waiting up to 5m0s for pod "pod-ab2f62cb-3229-4bb4-8cdc-f69af101e2c5" in namespace "emptydir-9088" to be "Succeeded or Failed" 
Apr 28 01:01:04.006: INFO: Pod "pod-ab2f62cb-3229-4bb4-8cdc-f69af101e2c5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.0422ms Apr 28 01:01:06.009: INFO: Pod "pod-ab2f62cb-3229-4bb4-8cdc-f69af101e2c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018283812s Apr 28 01:01:08.014: INFO: Pod "pod-ab2f62cb-3229-4bb4-8cdc-f69af101e2c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022363946s STEP: Saw pod success Apr 28 01:01:08.014: INFO: Pod "pod-ab2f62cb-3229-4bb4-8cdc-f69af101e2c5" satisfied condition "Succeeded or Failed" Apr 28 01:01:08.017: INFO: Trying to get logs from node latest-worker2 pod pod-ab2f62cb-3229-4bb4-8cdc-f69af101e2c5 container test-container: STEP: delete the pod Apr 28 01:01:08.115: INFO: Waiting for pod pod-ab2f62cb-3229-4bb4-8cdc-f69af101e2c5 to disappear Apr 28 01:01:08.118: INFO: Pod pod-ab2f62cb-3229-4bb4-8cdc-f69af101e2c5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:01:08.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9088" for this suite. 
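Each of the pod tests above logs the same pattern: "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'", followed by roughly 2-second polls of the pod phase with the elapsed time. A minimal sketch of that poll-until-terminal-phase loop, assuming a stand-in `get_phase` callable in place of the real Kubernetes API read of `pod.status.phase` (the actual e2e framework uses the Go client, not this helper):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       terminal=("Succeeded", "Failed")):
    """Poll get_phase() until it returns a terminal phase or timeout expires.

    get_phase is a hypothetical stand-in for an API call that reads the
    pod's status.phase field; interval/timeout mirror the 2s polls and
    5m0s budget seen in the log above.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        # Mirrors the log format: Phase="Pending", Elapsed: 2.01s
        print(f'Phase="{phase}", Elapsed: {elapsed:.3f}s')
        if phase in terminal:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)

# Simulated phase source: Pending twice, then Succeeded, matching the
# three status lines the test logged above.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), interval=0.01)
```

Note the loop checks the phase before the timeout, so a pod that reaches a terminal phase on the last poll still counts as satisfying the condition, which is why the log reports "satisfied condition" rather than a timeout even when the final poll lands near the deadline.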
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3320,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:01:08.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Apr 28 01:01:08.233: INFO: Waiting up to 5m0s for pod "client-containers-157e210c-6ee1-498f-8ab7-8e7fcac17e48" in namespace "containers-3012" to be "Succeeded or Failed" Apr 28 01:01:08.236: INFO: Pod "client-containers-157e210c-6ee1-498f-8ab7-8e7fcac17e48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.465439ms Apr 28 01:01:10.240: INFO: Pod "client-containers-157e210c-6ee1-498f-8ab7-8e7fcac17e48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00658858s Apr 28 01:01:12.245: INFO: Pod "client-containers-157e210c-6ee1-498f-8ab7-8e7fcac17e48": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011357823s STEP: Saw pod success Apr 28 01:01:12.245: INFO: Pod "client-containers-157e210c-6ee1-498f-8ab7-8e7fcac17e48" satisfied condition "Succeeded or Failed" Apr 28 01:01:12.248: INFO: Trying to get logs from node latest-worker2 pod client-containers-157e210c-6ee1-498f-8ab7-8e7fcac17e48 container test-container: STEP: delete the pod Apr 28 01:01:12.343: INFO: Waiting for pod client-containers-157e210c-6ee1-498f-8ab7-8e7fcac17e48 to disappear Apr 28 01:01:12.350: INFO: Pod client-containers-157e210c-6ee1-498f-8ab7-8e7fcac17e48 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:01:12.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3012" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3330,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:01:12.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 28 01:01:12.469: INFO: Waiting up to 5m0s for pod "pod-269d4bca-e0b2-4adc-808d-c90ef98b20e5" in namespace 
"emptydir-5542" to be "Succeeded or Failed" Apr 28 01:01:12.476: INFO: Pod "pod-269d4bca-e0b2-4adc-808d-c90ef98b20e5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.024509ms Apr 28 01:01:14.479: INFO: Pod "pod-269d4bca-e0b2-4adc-808d-c90ef98b20e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009863216s Apr 28 01:01:16.483: INFO: Pod "pod-269d4bca-e0b2-4adc-808d-c90ef98b20e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013946792s STEP: Saw pod success Apr 28 01:01:16.483: INFO: Pod "pod-269d4bca-e0b2-4adc-808d-c90ef98b20e5" satisfied condition "Succeeded or Failed" Apr 28 01:01:16.486: INFO: Trying to get logs from node latest-worker2 pod pod-269d4bca-e0b2-4adc-808d-c90ef98b20e5 container test-container: STEP: delete the pod Apr 28 01:01:16.504: INFO: Waiting for pod pod-269d4bca-e0b2-4adc-808d-c90ef98b20e5 to disappear Apr 28 01:01:16.527: INFO: Pod pod-269d4bca-e0b2-4adc-808d-c90ef98b20e5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:01:16.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5542" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3334,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:01:16.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 28 01:01:16.602: INFO: Waiting up to 5m0s for pod "pod-2f88e9b5-a4dd-454f-8d0c-4078a232348d" in namespace "emptydir-2396" to be "Succeeded or Failed" Apr 28 01:01:16.618: INFO: Pod "pod-2f88e9b5-a4dd-454f-8d0c-4078a232348d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.906627ms Apr 28 01:01:18.622: INFO: Pod "pod-2f88e9b5-a4dd-454f-8d0c-4078a232348d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020206802s Apr 28 01:01:20.626: INFO: Pod "pod-2f88e9b5-a4dd-454f-8d0c-4078a232348d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024529183s STEP: Saw pod success Apr 28 01:01:20.626: INFO: Pod "pod-2f88e9b5-a4dd-454f-8d0c-4078a232348d" satisfied condition "Succeeded or Failed" Apr 28 01:01:20.630: INFO: Trying to get logs from node latest-worker pod pod-2f88e9b5-a4dd-454f-8d0c-4078a232348d container test-container: STEP: delete the pod Apr 28 01:01:20.660: INFO: Waiting for pod pod-2f88e9b5-a4dd-454f-8d0c-4078a232348d to disappear Apr 28 01:01:20.671: INFO: Pod pod-2f88e9b5-a4dd-454f-8d0c-4078a232348d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:01:20.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2396" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3339,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:01:20.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:01:37.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1824" for this suite. • [SLOW TEST:17.099 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":199,"skipped":3345,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:01:37.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:01:44.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8275" for this suite. STEP: Destroying namespace "nsdeletetest-3752" for this suite. Apr 28 01:01:44.033: INFO: Namespace nsdeletetest-3752 was already deleted STEP: Destroying namespace "nsdeletetest-7767" for this suite. 
• [SLOW TEST:6.237 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":200,"skipped":3360,"failed":0} [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:01:44.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-e29a5c40-b1fe-4bb4-adc0-1337ff7d5886 STEP: Creating a pod to test consume secrets Apr 28 01:01:44.117: INFO: Waiting up to 5m0s for pod "pod-secrets-48bff4d2-1858-4eed-b1c8-5c3e44876564" in namespace "secrets-7037" to be "Succeeded or Failed" Apr 28 01:01:44.120: INFO: Pod "pod-secrets-48bff4d2-1858-4eed-b1c8-5c3e44876564": Phase="Pending", Reason="", readiness=false. Elapsed: 3.322374ms Apr 28 01:01:46.124: INFO: Pod "pod-secrets-48bff4d2-1858-4eed-b1c8-5c3e44876564": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007018197s Apr 28 01:01:48.128: INFO: Pod "pod-secrets-48bff4d2-1858-4eed-b1c8-5c3e44876564": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010907141s STEP: Saw pod success Apr 28 01:01:48.128: INFO: Pod "pod-secrets-48bff4d2-1858-4eed-b1c8-5c3e44876564" satisfied condition "Succeeded or Failed" Apr 28 01:01:48.131: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-48bff4d2-1858-4eed-b1c8-5c3e44876564 container secret-volume-test: STEP: delete the pod Apr 28 01:01:48.159: INFO: Waiting for pod pod-secrets-48bff4d2-1858-4eed-b1c8-5c3e44876564 to disappear Apr 28 01:01:48.174: INFO: Pod pod-secrets-48bff4d2-1858-4eed-b1c8-5c3e44876564 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:01:48.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7037" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3360,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:01:48.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:01:48.384: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4398" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":202,"skipped":3377,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:01:48.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-1200 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1200 STEP: Deleting pre-stop pod Apr 28 01:02:01.513: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:02:01.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1200" for this suite. • [SLOW TEST:13.165 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":203,"skipped":3386,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:02:01.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-7526 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7526 to expose endpoints map[] Apr 28 01:02:01.651: INFO: Get endpoints failed (5.449194ms elapsed, ignoring for 5s): endpoints 
"multi-endpoint-test" not found Apr 28 01:02:02.655: INFO: successfully validated that service multi-endpoint-test in namespace services-7526 exposes endpoints map[] (1.009875836s elapsed) STEP: Creating pod pod1 in namespace services-7526 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7526 to expose endpoints map[pod1:[100]] Apr 28 01:02:06.729: INFO: successfully validated that service multi-endpoint-test in namespace services-7526 exposes endpoints map[pod1:[100]] (4.066771194s elapsed) STEP: Creating pod pod2 in namespace services-7526 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7526 to expose endpoints map[pod1:[100] pod2:[101]] Apr 28 01:02:10.830: INFO: successfully validated that service multi-endpoint-test in namespace services-7526 exposes endpoints map[pod1:[100] pod2:[101]] (4.095433201s elapsed) STEP: Deleting pod pod1 in namespace services-7526 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7526 to expose endpoints map[pod2:[101]] Apr 28 01:02:11.890: INFO: successfully validated that service multi-endpoint-test in namespace services-7526 exposes endpoints map[pod2:[101]] (1.056371074s elapsed) STEP: Deleting pod pod2 in namespace services-7526 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7526 to expose endpoints map[] Apr 28 01:02:11.932: INFO: successfully validated that service multi-endpoint-test in namespace services-7526 exposes endpoints map[] (36.174835ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:02:12.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7526" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:10.607 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":204,"skipped":3400,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:02:12.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 01:02:12.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1693110-2e5f-44cc-8684-cccc4a944556" in namespace "downward-api-978" to be "Succeeded or Failed" Apr 28 01:02:12.337: INFO: Pod "downwardapi-volume-d1693110-2e5f-44cc-8684-cccc4a944556": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.863666ms Apr 28 01:02:14.341: INFO: Pod "downwardapi-volume-d1693110-2e5f-44cc-8684-cccc4a944556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023977543s Apr 28 01:02:16.345: INFO: Pod "downwardapi-volume-d1693110-2e5f-44cc-8684-cccc4a944556": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028187448s STEP: Saw pod success Apr 28 01:02:16.345: INFO: Pod "downwardapi-volume-d1693110-2e5f-44cc-8684-cccc4a944556" satisfied condition "Succeeded or Failed" Apr 28 01:02:16.348: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d1693110-2e5f-44cc-8684-cccc4a944556 container client-container: STEP: delete the pod Apr 28 01:02:16.369: INFO: Waiting for pod downwardapi-volume-d1693110-2e5f-44cc-8684-cccc4a944556 to disappear Apr 28 01:02:16.373: INFO: Pod downwardapi-volume-d1693110-2e5f-44cc-8684-cccc4a944556 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:02:16.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-978" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3464,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:02:16.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 28 01:02:16.435: INFO: Waiting up to 5m0s for pod "pod-369dd606-d782-42d8-9a44-87e1c7285ab5" in namespace "emptydir-9047" to be "Succeeded or Failed" Apr 28 01:02:16.445: INFO: Pod "pod-369dd606-d782-42d8-9a44-87e1c7285ab5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.493665ms Apr 28 01:02:18.449: INFO: Pod "pod-369dd606-d782-42d8-9a44-87e1c7285ab5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013402307s Apr 28 01:02:20.454: INFO: Pod "pod-369dd606-d782-42d8-9a44-87e1c7285ab5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018072482s STEP: Saw pod success Apr 28 01:02:20.454: INFO: Pod "pod-369dd606-d782-42d8-9a44-87e1c7285ab5" satisfied condition "Succeeded or Failed" Apr 28 01:02:20.456: INFO: Trying to get logs from node latest-worker pod pod-369dd606-d782-42d8-9a44-87e1c7285ab5 container test-container: STEP: delete the pod Apr 28 01:02:20.473: INFO: Waiting for pod pod-369dd606-d782-42d8-9a44-87e1c7285ab5 to disappear Apr 28 01:02:20.477: INFO: Pod pod-369dd606-d782-42d8-9a44-87e1c7285ab5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:02:20.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9047" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3477,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:02:20.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 28 01:02:20.575: INFO: namespace kubectl-9400 Apr 28 01:02:20.575: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9400' Apr 28 01:02:20.921: INFO: stderr: "" Apr 28 01:02:20.921: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 28 01:02:21.926: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 01:02:21.926: INFO: Found 0 / 1 Apr 28 01:02:22.925: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 01:02:22.925: INFO: Found 0 / 1 Apr 28 01:02:24.086: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 01:02:24.086: INFO: Found 1 / 1 Apr 28 01:02:24.086: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 28 01:02:24.089: INFO: Selector matched 1 pods for map[app:agnhost] Apr 28 01:02:24.089: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 28 01:02:24.090: INFO: wait on agnhost-master startup in kubectl-9400 Apr 28 01:02:24.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-shwlr agnhost-master --namespace=kubectl-9400' Apr 28 01:02:24.207: INFO: stderr: "" Apr 28 01:02:24.207: INFO: stdout: "Paused\n" STEP: exposing RC Apr 28 01:02:24.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9400' Apr 28 01:02:24.343: INFO: stderr: "" Apr 28 01:02:24.343: INFO: stdout: "service/rm2 exposed\n" Apr 28 01:02:24.346: INFO: Service rm2 in namespace kubectl-9400 found. 
STEP: exposing service Apr 28 01:02:26.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9400' Apr 28 01:02:26.472: INFO: stderr: "" Apr 28 01:02:26.472: INFO: stdout: "service/rm3 exposed\n" Apr 28 01:02:26.485: INFO: Service rm3 in namespace kubectl-9400 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:02:28.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9400" for this suite. • [SLOW TEST:8.013 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":207,"skipped":3485,"failed":0} SSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:02:28.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-7a8c44e9-7572-4278-a469-c2423532c5e6 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:02:28.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4730" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":208,"skipped":3494,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:02:28.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 01:02:28.756: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 28 01:02:33.763: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 28 01:02:33.763: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 28 01:02:33.787: 
INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-284 /apis/apps/v1/namespaces/deployment-284/deployments/test-cleanup-deployment 877a68e9-26b6-4763-9aae-9dfb811c1ec0 11596072 1 2020-04-28 01:02:33 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004205ea8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 28 01:02:34.081: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-284 /apis/apps/v1/namespaces/deployment-284/replicasets/test-cleanup-deployment-577c77b589 9dca2a64-2b0b-4c12-b714-2e68cc3b8ce6 11596074 1 2020-04-28 01:02:33 +0000 UTC 
map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 877a68e9-26b6-4763-9aae-9dfb811c1ec0 0xc00365c8d7 0xc00365c8d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00365cab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 28 01:02:34.081: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 28 01:02:34.081: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-284 /apis/apps/v1/namespaces/deployment-284/replicasets/test-cleanup-controller b0a3eaa6-5fc5-48ac-8c09-d0e31b76202b 11596073 1 2020-04-28 01:02:28 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 877a68e9-26b6-4763-9aae-9dfb811c1ec0 0xc00365c717 0xc00365c718}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00365c7f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 28 01:02:34.086: INFO: Pod "test-cleanup-controller-7kg84" is available: &Pod{ObjectMeta:{test-cleanup-controller-7kg84 test-cleanup-controller- deployment-284 /api/v1/namespaces/deployment-284/pods/test-cleanup-controller-7kg84 28e7dcca-3cfc-45c4-ac41-967231ebe70c 11596045 0 2020-04-28 01:02:28 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller b0a3eaa6-5fc5-48ac-8c09-d0e31b76202b 0xc00365d207 0xc00365d208}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-49px4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-49px4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-49px4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not
-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:02:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:02:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:02:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:02:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.167,StartTime:2020-04-28 01:02:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 01:02:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://02f12ae93e0b79b76a0fe040539ce3e7d7239345e8cd4b8ab41b0a779fa4f54e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.167,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 
28 01:02:34.086: INFO: Pod "test-cleanup-deployment-577c77b589-7nzhb" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-7nzhb test-cleanup-deployment-577c77b589- deployment-284 /api/v1/namespaces/deployment-284/pods/test-cleanup-deployment-577c77b589-7nzhb 3757d232-4f82-4a24-8249-f2593ff46810 11596080 0 2020-04-28 01:02:33 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 9dca2a64-2b0b-4c12-b714-2e68cc3b8ce6 0xc00365d417 0xc00365d418}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-49px4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-49px4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-49px4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePo
licy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:02:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:02:34.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-284" for this suite. 
• [SLOW TEST:5.671 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":209,"skipped":3534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:02:34.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 01:02:34.407: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:02:34.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7157" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":210,"skipped":3561,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:02:35.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 28 01:02:39.850: INFO: Successfully updated pod "adopt-release-2nffj" STEP: Checking that the Job readopts the Pod Apr 28 01:02:39.850: INFO: Waiting up to 15m0s for pod "adopt-release-2nffj" in namespace "job-1482" to be "adopted" Apr 28 01:02:39.874: INFO: Pod "adopt-release-2nffj": Phase="Running", Reason="", readiness=true. Elapsed: 23.493581ms Apr 28 01:02:41.879: INFO: Pod "adopt-release-2nffj": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.028467928s Apr 28 01:02:41.879: INFO: Pod "adopt-release-2nffj" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 28 01:02:42.387: INFO: Successfully updated pod "adopt-release-2nffj" STEP: Checking that the Job releases the Pod Apr 28 01:02:42.387: INFO: Waiting up to 15m0s for pod "adopt-release-2nffj" in namespace "job-1482" to be "released" Apr 28 01:02:42.392: INFO: Pod "adopt-release-2nffj": Phase="Running", Reason="", readiness=true. Elapsed: 4.760201ms Apr 28 01:02:44.396: INFO: Pod "adopt-release-2nffj": Phase="Running", Reason="", readiness=true. Elapsed: 2.009252028s Apr 28 01:02:44.397: INFO: Pod "adopt-release-2nffj" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:02:44.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1482" for this suite. • [SLOW TEST:9.356 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":211,"skipped":3579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:02:44.410: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0428 01:03:24.637644 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 28 01:03:24.637: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:03:24.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3124" for this suite. 
• [SLOW TEST:40.236 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":212,"skipped":3633,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:03:24.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-474ab614-fab5-45d3-a4fa-ea2225117fac STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-474ab614-fab5-45d3-a4fa-ea2225117fac STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:03:31.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3221" for this suite. 
• [SLOW TEST:6.795 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:03:31.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Apr 28 01:03:32.306: INFO: created pod pod-service-account-defaultsa Apr 28 01:03:32.306: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 28 01:03:32.310: INFO: created pod pod-service-account-mountsa Apr 28 01:03:32.310: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 28 01:03:32.358: INFO: created pod pod-service-account-nomountsa Apr 28 01:03:32.358: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 28 01:03:32.464: INFO: created pod pod-service-account-defaultsa-mountspec Apr 28 01:03:32.464: INFO: pod 
pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 28 01:03:32.556: INFO: created pod pod-service-account-mountsa-mountspec Apr 28 01:03:32.556: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 28 01:03:32.640: INFO: created pod pod-service-account-nomountsa-mountspec Apr 28 01:03:32.640: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 28 01:03:32.829: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 28 01:03:32.829: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 28 01:03:32.918: INFO: created pod pod-service-account-mountsa-nomountspec Apr 28 01:03:32.918: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 28 01:03:33.043: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 28 01:03:33.043: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:03:33.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-342" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":214,"skipped":3684,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:03:33.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 01:03:38.475: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 01:03:41.392: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 01:03:43.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 01:03:45.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 01:03:47.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 01:03:49.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632618, loc:(*time.Location)(0x7b1e080)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 01:03:52.412: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 28 01:03:52.438: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:03:52.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8391" for this suite. STEP: Destroying namespace "webhook-8391-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.355 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":215,"skipped":3688,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:03:52.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 01:03:52.662: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fbe8936-5457-43d0-a75a-da784704dedc" in namespace "projected-8540" to be "Succeeded or Failed" Apr 28 01:03:52.679: 
INFO: Pod "downwardapi-volume-1fbe8936-5457-43d0-a75a-da784704dedc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.320947ms Apr 28 01:03:54.682: INFO: Pod "downwardapi-volume-1fbe8936-5457-43d0-a75a-da784704dedc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019978809s Apr 28 01:03:56.686: INFO: Pod "downwardapi-volume-1fbe8936-5457-43d0-a75a-da784704dedc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024037122s STEP: Saw pod success Apr 28 01:03:56.686: INFO: Pod "downwardapi-volume-1fbe8936-5457-43d0-a75a-da784704dedc" satisfied condition "Succeeded or Failed" Apr 28 01:03:56.689: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1fbe8936-5457-43d0-a75a-da784704dedc container client-container: STEP: delete the pod Apr 28 01:03:56.724: INFO: Waiting for pod downwardapi-volume-1fbe8936-5457-43d0-a75a-da784704dedc to disappear Apr 28 01:03:56.751: INFO: Pod downwardapi-volume-1fbe8936-5457-43d0-a75a-da784704dedc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:03:56.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8540" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3700,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:03:56.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 01:03:56.807: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 28 01:03:59.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1318 create -f -' Apr 28 01:04:02.917: INFO: stderr: "" Apr 28 01:04:02.917: INFO: stdout: "e2e-test-crd-publish-openapi-3992-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 28 01:04:02.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1318 delete e2e-test-crd-publish-openapi-3992-crds test-cr' Apr 28 01:04:03.063: INFO: stderr: "" Apr 28 01:04:03.063: INFO: stdout: 
"e2e-test-crd-publish-openapi-3992-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 28 01:04:03.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1318 apply -f -' Apr 28 01:04:03.311: INFO: stderr: "" Apr 28 01:04:03.311: INFO: stdout: "e2e-test-crd-publish-openapi-3992-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 28 01:04:03.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1318 delete e2e-test-crd-publish-openapi-3992-crds test-cr' Apr 28 01:04:03.423: INFO: stderr: "" Apr 28 01:04:03.423: INFO: stdout: "e2e-test-crd-publish-openapi-3992-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 28 01:04:03.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3992-crds' Apr 28 01:04:03.682: INFO: stderr: "" Apr 28 01:04:03.682: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3992-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:04:06.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1318" for this suite. 
• [SLOW TEST:9.807 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":217,"skipped":3718,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:04:06.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:04:06.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1965" for this suite. STEP: Destroying namespace "nspatchtest-5c809149-b311-49a9-9e40-9665f045331f-67" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":218,"skipped":3733,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:04:06.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:05:06.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5694" for this suite. 
• [SLOW TEST:60.080 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3739,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:05:06.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:05:22.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5999" for this suite. 
• [SLOW TEST:16.113 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":220,"skipped":3743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:05:22.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 28 01:05:22.958: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6868 /api/v1/namespaces/watch-6868/configmaps/e2e-watch-test-watch-closed 16f5b9cf-a39d-4527-a69b-125862a85893 11597166 0 2020-04-28 01:05:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 
01:05:22.959: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6868 /api/v1/namespaces/watch-6868/configmaps/e2e-watch-test-watch-closed 16f5b9cf-a39d-4527-a69b-125862a85893 11597167 0 2020-04-28 01:05:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 28 01:05:23.011: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6868 /api/v1/namespaces/watch-6868/configmaps/e2e-watch-test-watch-closed 16f5b9cf-a39d-4527-a69b-125862a85893 11597168 0 2020-04-28 01:05:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 01:05:23.012: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6868 /api/v1/namespaces/watch-6868/configmaps/e2e-watch-test-watch-closed 16f5b9cf-a39d-4527-a69b-125862a85893 11597170 0 2020-04-28 01:05:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:05:23.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6868" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":221,"skipped":3770,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:05:23.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-00d32be6-a9a5-49e8-b58d-9fabff842aff STEP: Creating secret with name secret-projected-all-test-volume-9bb78a07-b2b7-4390-b376-9536e2657f1f STEP: Creating a pod to test Check all projections for projected volume plugin Apr 28 01:05:23.101: INFO: Waiting up to 5m0s for pod "projected-volume-1f3b9640-28ad-46a2-a1dc-0ed8c7ddde4e" in namespace "projected-2029" to be "Succeeded or Failed" Apr 28 01:05:23.104: INFO: Pod "projected-volume-1f3b9640-28ad-46a2-a1dc-0ed8c7ddde4e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.338653ms Apr 28 01:05:25.108: INFO: Pod "projected-volume-1f3b9640-28ad-46a2-a1dc-0ed8c7ddde4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007347782s Apr 28 01:05:27.113: INFO: Pod "projected-volume-1f3b9640-28ad-46a2-a1dc-0ed8c7ddde4e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011766732s STEP: Saw pod success Apr 28 01:05:27.113: INFO: Pod "projected-volume-1f3b9640-28ad-46a2-a1dc-0ed8c7ddde4e" satisfied condition "Succeeded or Failed" Apr 28 01:05:27.116: INFO: Trying to get logs from node latest-worker2 pod projected-volume-1f3b9640-28ad-46a2-a1dc-0ed8c7ddde4e container projected-all-volume-test: STEP: delete the pod Apr 28 01:05:27.161: INFO: Waiting for pod projected-volume-1f3b9640-28ad-46a2-a1dc-0ed8c7ddde4e to disappear Apr 28 01:05:27.176: INFO: Pod projected-volume-1f3b9640-28ad-46a2-a1dc-0ed8c7ddde4e no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:05:27.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2029" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:05:27.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let 
webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 01:05:27.722: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 01:05:29.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632727, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632727, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632727, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632727, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 01:05:32.791: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: 
fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:05:32.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9994" for this suite. STEP: Destroying namespace "webhook-9994-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.690 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":223,"skipped":3813,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:05:32.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 28 01:05:35.962: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:05:36.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7790" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3815,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:05:36.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Apr 28 01:05:36.120: INFO: Waiting up to 5m0s for pod "var-expansion-81c9e8f0-88de-4346-9407-a00e5f19e87b" in namespace "var-expansion-2581" to be "Succeeded or Failed" Apr 28 01:05:36.178: INFO: Pod "var-expansion-81c9e8f0-88de-4346-9407-a00e5f19e87b": Phase="Pending", Reason="", readiness=false. Elapsed: 58.009964ms Apr 28 01:05:38.195: INFO: Pod "var-expansion-81c9e8f0-88de-4346-9407-a00e5f19e87b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075602875s Apr 28 01:05:40.200: INFO: Pod "var-expansion-81c9e8f0-88de-4346-9407-a00e5f19e87b": Phase="Running", Reason="", readiness=true. Elapsed: 4.080215922s Apr 28 01:05:42.204: INFO: Pod "var-expansion-81c9e8f0-88de-4346-9407-a00e5f19e87b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084781392s STEP: Saw pod success Apr 28 01:05:42.205: INFO: Pod "var-expansion-81c9e8f0-88de-4346-9407-a00e5f19e87b" satisfied condition "Succeeded or Failed" Apr 28 01:05:42.210: INFO: Trying to get logs from node latest-worker pod var-expansion-81c9e8f0-88de-4346-9407-a00e5f19e87b container dapi-container: STEP: delete the pod Apr 28 01:05:42.232: INFO: Waiting for pod var-expansion-81c9e8f0-88de-4346-9407-a00e5f19e87b to disappear Apr 28 01:05:42.237: INFO: Pod var-expansion-81c9e8f0-88de-4346-9407-a00e5f19e87b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:05:42.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2581" for this suite. 
• [SLOW TEST:6.195 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3832,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:05:42.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 01:05:42.661: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 01:05:44.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723632742, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632742, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632742, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632742, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 01:05:47.685: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:05:59.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3292" for this suite. STEP: Destroying namespace "webhook-3292-markers" for this suite. 
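The "should honor timeout" steps above register a deliberately slow webhook with a 1s timeout against a 5s handler latency. A sketch of what such a registration looks like; the service name and namespace come from the log, while the webhook name, path, and rule targets are assumptions:

```python
# Sketch of a slow-webhook registration like the one the test exercises.
# With timeoutSeconds=1 and failurePolicy="Fail", requests matching the
# rules fail when the webhook takes 5s; switching failurePolicy to
# "Ignore" makes the same timeout non-fatal. Omitting timeoutSeconds
# defaults it to 10 in admissionregistration.k8s.io/v1.
webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "slow-webhook-demo"},  # assumed name
    "webhooks": [{
        "name": "slow.example.com",             # assumed name
        "timeoutSeconds": 1,
        "failurePolicy": "Fail",
        "clientConfig": {
            "service": {
                "namespace": "webhook-3292",     # from the log
                "name": "e2e-test-webhook",      # from the log
                "path": "/slow",                 # assumed path
            },
        },
        "rules": [{
            "apiGroups": [""], "apiVersions": ["v1"],
            "operations": ["CREATE"], "resources": ["configmaps"],
        }],
        "sideEffects": "None",
        "admissionReviewVersions": ["v1"],
    }],
}
```

The four STEP variations in the log correspond to toggling exactly these two fields: `timeoutSeconds` (1s, longer than latency, or omitted) and `failurePolicy` (`Fail` vs. `Ignore`).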
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.707 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":226,"skipped":3844,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:05:59.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 28 01:06:00.011: INFO: Waiting up to 5m0s for pod "downward-api-fe42dfe4-6fd3-499f-b017-c2211748708f" in namespace "downward-api-3390" to be "Succeeded or Failed" Apr 28 01:06:00.034: INFO: Pod "downward-api-fe42dfe4-6fd3-499f-b017-c2211748708f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.279202ms Apr 28 01:06:02.038: INFO: Pod "downward-api-fe42dfe4-6fd3-499f-b017-c2211748708f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026754394s Apr 28 01:06:04.042: INFO: Pod "downward-api-fe42dfe4-6fd3-499f-b017-c2211748708f": Phase="Running", Reason="", readiness=true. Elapsed: 4.030938304s Apr 28 01:06:06.047: INFO: Pod "downward-api-fe42dfe4-6fd3-499f-b017-c2211748708f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035217214s STEP: Saw pod success Apr 28 01:06:06.047: INFO: Pod "downward-api-fe42dfe4-6fd3-499f-b017-c2211748708f" satisfied condition "Succeeded or Failed" Apr 28 01:06:06.050: INFO: Trying to get logs from node latest-worker2 pod downward-api-fe42dfe4-6fd3-499f-b017-c2211748708f container dapi-container: STEP: delete the pod Apr 28 01:06:06.118: INFO: Waiting for pod downward-api-fe42dfe4-6fd3-499f-b017-c2211748708f to disappear Apr 28 01:06:06.121: INFO: Pod downward-api-fe42dfe4-6fd3-499f-b017-c2211748708f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:06:06.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3390" for this suite. 
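The Downward API test above exposes the pod's own UID to its container through an environment variable. A minimal sketch of the relevant part of such a pod spec; the variable name and image are illustrative assumptions, while `dapi-container` is the container name from the log:

```python
# Sketch of a downward-API pod: metadata.uid is injected into the
# container's environment via a fieldRef, so the process can read its
# own pod UID without talking to the API server.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",          # name taken from the log
            "image": "busybox",                # assumed image
            "command": ["sh", "-c", "env"],    # dump env so logs show the UID
            "env": [{
                "name": "POD_UID",             # assumed variable name
                "valueFrom": {"fieldRef": {"fieldPath": "metadata.uid"}},
            }],
        }],
    },
}
```

Other commonly exposed fields follow the same pattern, e.g. `metadata.name`, `metadata.namespace`, and `status.podIP`.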
• [SLOW TEST:6.177 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3851,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:06:06.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 28 01:06:06.175: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:06:21.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5447" for this suite. 
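The CustomResourcePublishOpenAPI test above sets up a multi-version CRD, then flips one version's `served` flag to false and verifies its definitions drop out of the published OpenAPI spec while the other version's stay unchanged. A sketch of such a CRD; the group, kind, and version names are illustrative assumptions:

```python
# Sketch of a multi-version CRD like the one the test sets up.
# Setting a version's "served" to False removes that version's schema
# from the published OpenAPI spec; the storage version must remain
# served is not required, but exactly one version must have storage=True.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "demos.example.com"},  # assumed name
    "spec": {
        "group": "example.com",                 # assumed group
        "names": {"plural": "demos", "singular": "demo", "kind": "Demo"},
        "scope": "Namespaced",
        "versions": [
            {"name": "v1", "served": True, "storage": True,
             "schema": {"openAPIV3Schema": {"type": "object"}}},
            # Flipping this flag is the "mark a version not served" step:
            {"name": "v2", "served": False, "storage": False,
             "schema": {"openAPIV3Schema": {"type": "object"}}},
        ],
    },
}
```

After the update, `GET /openapi/v2` no longer lists definitions for `v2`, which is what the "check the unserved version gets removed" step asserts.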
• [SLOW TEST:15.416 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":228,"skipped":3853,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:06:21.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:06:52.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8766" for this suite. • [SLOW TEST:31.155 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":229,"skipped":3894,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:06:52.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 01:06:52.786: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6ac93e2-87bc-4e1e-b84f-e36e18e852bc" in namespace "downward-api-5198" to be "Succeeded or Failed" Apr 28 01:06:52.795: INFO: Pod "downwardapi-volume-f6ac93e2-87bc-4e1e-b84f-e36e18e852bc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.29839ms Apr 28 01:06:54.799: INFO: Pod "downwardapi-volume-f6ac93e2-87bc-4e1e-b84f-e36e18e852bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013216145s Apr 28 01:06:56.803: INFO: Pod "downwardapi-volume-f6ac93e2-87bc-4e1e-b84f-e36e18e852bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017426195s STEP: Saw pod success Apr 28 01:06:56.803: INFO: Pod "downwardapi-volume-f6ac93e2-87bc-4e1e-b84f-e36e18e852bc" satisfied condition "Succeeded or Failed" Apr 28 01:06:56.807: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f6ac93e2-87bc-4e1e-b84f-e36e18e852bc container client-container: STEP: delete the pod Apr 28 01:06:56.848: INFO: Waiting for pod downwardapi-volume-f6ac93e2-87bc-4e1e-b84f-e36e18e852bc to disappear Apr 28 01:06:56.891: INFO: Pod downwardapi-volume-f6ac93e2-87bc-4e1e-b84f-e36e18e852bc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:06:56.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5198" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3915,"failed":0} SSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:06:56.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 01:06:56.942: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1856 I0428 01:06:56.965674 7 runners.go:190] Created replication controller with 
name: svc-latency-rc, namespace: svc-latency-1856, replica count: 1 I0428 01:06:58.016077 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 01:06:59.016279 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 01:07:00.016489 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 01:07:01.016785 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 28 01:07:01.141: INFO: Created: latency-svc-5xvxt Apr 28 01:07:01.155: INFO: Got endpoints: latency-svc-5xvxt [38.455417ms] Apr 28 01:07:01.178: INFO: Created: latency-svc-m8xhr Apr 28 01:07:01.190: INFO: Got endpoints: latency-svc-m8xhr [34.578883ms] Apr 28 01:07:01.207: INFO: Created: latency-svc-t5sqk Apr 28 01:07:01.251: INFO: Got endpoints: latency-svc-t5sqk [95.454833ms] Apr 28 01:07:01.268: INFO: Created: latency-svc-nfkwl Apr 28 01:07:01.281: INFO: Got endpoints: latency-svc-nfkwl [125.489723ms] Apr 28 01:07:01.317: INFO: Created: latency-svc-w9zvt Apr 28 01:07:01.328: INFO: Got endpoints: latency-svc-w9zvt [173.356861ms] Apr 28 01:07:01.346: INFO: Created: latency-svc-hvhtg Apr 28 01:07:01.413: INFO: Got endpoints: latency-svc-hvhtg [257.468922ms] Apr 28 01:07:01.416: INFO: Created: latency-svc-kmdqg Apr 28 01:07:01.424: INFO: Got endpoints: latency-svc-kmdqg [269.276517ms] Apr 28 01:07:01.441: INFO: Created: latency-svc-wf9v4 Apr 28 01:07:01.455: INFO: Got endpoints: latency-svc-wf9v4 [299.520788ms] Apr 28 01:07:01.471: INFO: Created: latency-svc-t9fzz Apr 28 01:07:01.485: INFO: Got endpoints: latency-svc-t9fzz [329.603154ms] Apr 28 01:07:01.503: INFO: Created: latency-svc-jszvv Apr 28 01:07:01.538: INFO: 
Got endpoints: latency-svc-jszvv [382.884239ms] Apr 28 01:07:01.550: INFO: Created: latency-svc-7gcnj Apr 28 01:07:01.567: INFO: Got endpoints: latency-svc-7gcnj [412.013497ms] Apr 28 01:07:01.587: INFO: Created: latency-svc-b5kv2 Apr 28 01:07:01.603: INFO: Got endpoints: latency-svc-b5kv2 [448.223759ms] Apr 28 01:07:01.623: INFO: Created: latency-svc-28s6g Apr 28 01:07:01.700: INFO: Got endpoints: latency-svc-28s6g [544.608523ms] Apr 28 01:07:01.702: INFO: Created: latency-svc-46sh2 Apr 28 01:07:01.735: INFO: Got endpoints: latency-svc-46sh2 [580.237185ms] Apr 28 01:07:01.753: INFO: Created: latency-svc-xkcrr Apr 28 01:07:01.778: INFO: Got endpoints: latency-svc-xkcrr [623.004051ms] Apr 28 01:07:01.832: INFO: Created: latency-svc-c4rjv Apr 28 01:07:01.837: INFO: Got endpoints: latency-svc-c4rjv [681.87857ms] Apr 28 01:07:01.858: INFO: Created: latency-svc-qsfwl Apr 28 01:07:01.874: INFO: Got endpoints: latency-svc-qsfwl [684.109158ms] Apr 28 01:07:01.891: INFO: Created: latency-svc-l6bxk Apr 28 01:07:01.904: INFO: Got endpoints: latency-svc-l6bxk [653.374807ms] Apr 28 01:07:01.921: INFO: Created: latency-svc-s2g9x Apr 28 01:07:01.970: INFO: Got endpoints: latency-svc-s2g9x [688.888318ms] Apr 28 01:07:01.971: INFO: Created: latency-svc-rtv5s Apr 28 01:07:01.975: INFO: Got endpoints: latency-svc-rtv5s [646.656191ms] Apr 28 01:07:02.018: INFO: Created: latency-svc-lb5zg Apr 28 01:07:02.031: INFO: Got endpoints: latency-svc-lb5zg [618.598071ms] Apr 28 01:07:02.048: INFO: Created: latency-svc-bjnqz Apr 28 01:07:02.060: INFO: Got endpoints: latency-svc-bjnqz [635.03893ms] Apr 28 01:07:02.101: INFO: Created: latency-svc-9gqlm Apr 28 01:07:02.107: INFO: Got endpoints: latency-svc-9gqlm [652.651616ms] Apr 28 01:07:02.180: INFO: Created: latency-svc-h26fq Apr 28 01:07:02.221: INFO: Got endpoints: latency-svc-h26fq [736.24514ms] Apr 28 01:07:02.246: INFO: Created: latency-svc-dlzjd Apr 28 01:07:02.268: INFO: Got endpoints: latency-svc-dlzjd [729.96537ms] Apr 28 01:07:02.294: 
INFO: Created: latency-svc-btqxc Apr 28 01:07:02.310: INFO: Got endpoints: latency-svc-btqxc [742.855987ms] Apr 28 01:07:02.364: INFO: Created: latency-svc-c7kh4 Apr 28 01:07:02.407: INFO: Created: latency-svc-hhmk7 Apr 28 01:07:02.407: INFO: Got endpoints: latency-svc-c7kh4 [803.962277ms] Apr 28 01:07:02.418: INFO: Got endpoints: latency-svc-hhmk7 [718.160911ms] Apr 28 01:07:02.438: INFO: Created: latency-svc-gdxbn Apr 28 01:07:02.460: INFO: Got endpoints: latency-svc-gdxbn [724.698036ms] Apr 28 01:07:02.514: INFO: Created: latency-svc-k9bpj Apr 28 01:07:02.558: INFO: Got endpoints: latency-svc-k9bpj [779.616657ms] Apr 28 01:07:02.652: INFO: Created: latency-svc-859cb Apr 28 01:07:02.671: INFO: Got endpoints: latency-svc-859cb [833.976196ms] Apr 28 01:07:02.672: INFO: Created: latency-svc-l7hjc Apr 28 01:07:02.689: INFO: Got endpoints: latency-svc-l7hjc [814.466318ms] Apr 28 01:07:02.714: INFO: Created: latency-svc-vw6fl Apr 28 01:07:02.736: INFO: Got endpoints: latency-svc-vw6fl [832.31418ms] Apr 28 01:07:02.828: INFO: Created: latency-svc-wvn9m Apr 28 01:07:02.844: INFO: Got endpoints: latency-svc-wvn9m [874.373122ms] Apr 28 01:07:02.869: INFO: Created: latency-svc-mfzsn Apr 28 01:07:02.874: INFO: Got endpoints: latency-svc-mfzsn [899.313302ms] Apr 28 01:07:02.910: INFO: Created: latency-svc-x7sj6 Apr 28 01:07:02.930: INFO: Got endpoints: latency-svc-x7sj6 [898.50278ms] Apr 28 01:07:02.954: INFO: Created: latency-svc-7kltm Apr 28 01:07:02.964: INFO: Got endpoints: latency-svc-7kltm [904.841507ms] Apr 28 01:07:02.984: INFO: Created: latency-svc-bxjs2 Apr 28 01:07:02.999: INFO: Got endpoints: latency-svc-bxjs2 [891.481264ms] Apr 28 01:07:03.047: INFO: Created: latency-svc-7s7n7 Apr 28 01:07:03.053: INFO: Got endpoints: latency-svc-7s7n7 [831.866495ms] Apr 28 01:07:03.073: INFO: Created: latency-svc-hqn7m Apr 28 01:07:03.090: INFO: Got endpoints: latency-svc-hqn7m [822.068708ms] Apr 28 01:07:03.103: INFO: Created: latency-svc-9v542 Apr 28 01:07:03.113: INFO: Got 
endpoints: latency-svc-9v542 [802.866803ms] Apr 28 01:07:03.127: INFO: Created: latency-svc-9nbsh Apr 28 01:07:03.137: INFO: Got endpoints: latency-svc-9nbsh [729.547922ms] Apr 28 01:07:03.185: INFO: Created: latency-svc-c5ns7 Apr 28 01:07:03.206: INFO: Created: latency-svc-jx724 Apr 28 01:07:03.206: INFO: Got endpoints: latency-svc-c5ns7 [787.490345ms] Apr 28 01:07:03.221: INFO: Got endpoints: latency-svc-jx724 [760.461787ms] Apr 28 01:07:03.266: INFO: Created: latency-svc-rs9kl Apr 28 01:07:03.328: INFO: Got endpoints: latency-svc-rs9kl [770.346621ms] Apr 28 01:07:03.336: INFO: Created: latency-svc-lr7gm Apr 28 01:07:03.374: INFO: Got endpoints: latency-svc-lr7gm [702.60761ms] Apr 28 01:07:03.409: INFO: Created: latency-svc-thsvq Apr 28 01:07:03.419: INFO: Got endpoints: latency-svc-thsvq [730.290765ms] Apr 28 01:07:03.485: INFO: Created: latency-svc-fcc95 Apr 28 01:07:03.492: INFO: Got endpoints: latency-svc-fcc95 [755.601467ms] Apr 28 01:07:03.523: INFO: Created: latency-svc-gbz9f Apr 28 01:07:03.539: INFO: Got endpoints: latency-svc-gbz9f [695.084231ms] Apr 28 01:07:03.565: INFO: Created: latency-svc-944tb Apr 28 01:07:03.581: INFO: Got endpoints: latency-svc-944tb [706.956616ms] Apr 28 01:07:03.614: INFO: Created: latency-svc-wcvxg Apr 28 01:07:03.656: INFO: Got endpoints: latency-svc-wcvxg [725.813685ms] Apr 28 01:07:03.685: INFO: Created: latency-svc-lp7pr Apr 28 01:07:03.717: INFO: Got endpoints: latency-svc-lp7pr [753.003315ms] Apr 28 01:07:03.769: INFO: Created: latency-svc-9wsjq Apr 28 01:07:03.778: INFO: Got endpoints: latency-svc-9wsjq [778.537638ms] Apr 28 01:07:03.805: INFO: Created: latency-svc-tgjn5 Apr 28 01:07:03.814: INFO: Got endpoints: latency-svc-tgjn5 [760.547042ms] Apr 28 01:07:03.855: INFO: Created: latency-svc-2fm96 Apr 28 01:07:03.862: INFO: Got endpoints: latency-svc-2fm96 [771.186421ms] Apr 28 01:07:03.902: INFO: Created: latency-svc-l4nkm Apr 28 01:07:03.916: INFO: Got endpoints: latency-svc-l4nkm [802.791754ms] Apr 28 01:07:03.938: 
INFO: Created: latency-svc-7cgdm Apr 28 01:07:03.993: INFO: Got endpoints: latency-svc-7cgdm [856.278619ms] Apr 28 01:07:04.015: INFO: Created: latency-svc-657rp Apr 28 01:07:04.031: INFO: Got endpoints: latency-svc-657rp [824.973218ms] Apr 28 01:07:04.056: INFO: Created: latency-svc-6l895 Apr 28 01:07:04.078: INFO: Got endpoints: latency-svc-6l895 [857.66417ms] Apr 28 01:07:04.137: INFO: Created: latency-svc-vlm7r Apr 28 01:07:04.161: INFO: Got endpoints: latency-svc-vlm7r [832.314237ms] Apr 28 01:07:04.167: INFO: Created: latency-svc-fwcgd Apr 28 01:07:04.183: INFO: Created: latency-svc-s4k87 Apr 28 01:07:04.183: INFO: Got endpoints: latency-svc-fwcgd [809.479911ms] Apr 28 01:07:04.200: INFO: Got endpoints: latency-svc-s4k87 [781.117269ms] Apr 28 01:07:04.237: INFO: Created: latency-svc-t5gqv Apr 28 01:07:04.293: INFO: Got endpoints: latency-svc-t5gqv [800.705882ms] Apr 28 01:07:04.295: INFO: Created: latency-svc-tdq5h Apr 28 01:07:04.300: INFO: Got endpoints: latency-svc-tdq5h [760.924538ms] Apr 28 01:07:04.321: INFO: Created: latency-svc-xhfmv Apr 28 01:07:04.335: INFO: Got endpoints: latency-svc-xhfmv [753.497206ms] Apr 28 01:07:04.352: INFO: Created: latency-svc-khgkq Apr 28 01:07:04.365: INFO: Got endpoints: latency-svc-khgkq [709.194644ms] Apr 28 01:07:04.438: INFO: Created: latency-svc-k5cf4 Apr 28 01:07:04.465: INFO: Got endpoints: latency-svc-k5cf4 [747.242175ms] Apr 28 01:07:04.466: INFO: Created: latency-svc-8g67z Apr 28 01:07:04.479: INFO: Got endpoints: latency-svc-8g67z [701.237088ms] Apr 28 01:07:04.531: INFO: Created: latency-svc-mqxzq Apr 28 01:07:04.575: INFO: Got endpoints: latency-svc-mqxzq [760.770497ms] Apr 28 01:07:04.579: INFO: Created: latency-svc-bgfbw Apr 28 01:07:04.605: INFO: Got endpoints: latency-svc-bgfbw [743.60492ms] Apr 28 01:07:04.621: INFO: Created: latency-svc-sgwcs Apr 28 01:07:04.635: INFO: Got endpoints: latency-svc-sgwcs [719.264914ms] Apr 28 01:07:04.656: INFO: Created: latency-svc-rs45w Apr 28 01:07:04.706: INFO: Got 
endpoints: latency-svc-rs45w [712.950948ms] Apr 28 01:07:04.723: INFO: Created: latency-svc-qt58k Apr 28 01:07:04.738: INFO: Got endpoints: latency-svc-qt58k [707.178338ms] Apr 28 01:07:04.758: INFO: Created: latency-svc-9z95q Apr 28 01:07:04.774: INFO: Got endpoints: latency-svc-9z95q [695.525251ms] Apr 28 01:07:04.795: INFO: Created: latency-svc-zxlj7 Apr 28 01:07:04.886: INFO: Got endpoints: latency-svc-zxlj7 [724.854288ms] Apr 28 01:07:04.887: INFO: Created: latency-svc-v84gc Apr 28 01:07:04.893: INFO: Got endpoints: latency-svc-v84gc [709.887895ms] Apr 28 01:07:04.915: INFO: Created: latency-svc-gg7qj Apr 28 01:07:04.929: INFO: Got endpoints: latency-svc-gg7qj [728.916951ms] Apr 28 01:07:04.950: INFO: Created: latency-svc-49xld Apr 28 01:07:04.966: INFO: Got endpoints: latency-svc-49xld [673.026755ms] Apr 28 01:07:05.018: INFO: Created: latency-svc-chw6f Apr 28 01:07:05.035: INFO: Created: latency-svc-skdwh Apr 28 01:07:05.035: INFO: Got endpoints: latency-svc-chw6f [734.678892ms] Apr 28 01:07:05.048: INFO: Got endpoints: latency-svc-skdwh [713.383309ms] Apr 28 01:07:05.065: INFO: Created: latency-svc-lhbp5 Apr 28 01:07:05.078: INFO: Got endpoints: latency-svc-lhbp5 [712.655141ms] Apr 28 01:07:05.096: INFO: Created: latency-svc-6nml7 Apr 28 01:07:05.108: INFO: Got endpoints: latency-svc-6nml7 [643.676009ms] Apr 28 01:07:05.143: INFO: Created: latency-svc-w5jgn Apr 28 01:07:05.150: INFO: Got endpoints: latency-svc-w5jgn [671.412198ms] Apr 28 01:07:05.166: INFO: Created: latency-svc-6cjkn Apr 28 01:07:05.203: INFO: Got endpoints: latency-svc-6cjkn [628.685317ms] Apr 28 01:07:05.233: INFO: Created: latency-svc-qctj8 Apr 28 01:07:05.293: INFO: Got endpoints: latency-svc-qctj8 [687.908543ms] Apr 28 01:07:05.296: INFO: Created: latency-svc-mg5vs Apr 28 01:07:05.300: INFO: Got endpoints: latency-svc-mg5vs [665.187772ms] Apr 28 01:07:05.322: INFO: Created: latency-svc-wftxd Apr 28 01:07:05.358: INFO: Got endpoints: latency-svc-wftxd [651.713929ms] Apr 28 01:07:05.389: 
INFO: Created: latency-svc-26m58 Apr 28 01:07:05.418: INFO: Got endpoints: latency-svc-26m58 [680.345947ms] Apr 28 01:07:05.431: INFO: Created: latency-svc-fsx82 Apr 28 01:07:05.445: INFO: Got endpoints: latency-svc-fsx82 [671.011229ms] Apr 28 01:07:05.467: INFO: Created: latency-svc-qgbhg Apr 28 01:07:05.481: INFO: Got endpoints: latency-svc-qgbhg [595.364584ms] Apr 28 01:07:05.504: INFO: Created: latency-svc-9f2qn Apr 28 01:07:05.517: INFO: Got endpoints: latency-svc-9f2qn [623.712555ms] Apr 28 01:07:05.556: INFO: Created: latency-svc-ck6qj Apr 28 01:07:05.581: INFO: Created: latency-svc-vzxr4 Apr 28 01:07:05.581: INFO: Got endpoints: latency-svc-ck6qj [651.550131ms] Apr 28 01:07:05.595: INFO: Got endpoints: latency-svc-vzxr4 [629.242816ms] Apr 28 01:07:05.611: INFO: Created: latency-svc-8h6kr Apr 28 01:07:05.636: INFO: Got endpoints: latency-svc-8h6kr [600.560208ms] Apr 28 01:07:05.712: INFO: Created: latency-svc-cqdbx Apr 28 01:07:05.725: INFO: Got endpoints: latency-svc-cqdbx [676.609911ms] Apr 28 01:07:05.754: INFO: Created: latency-svc-4hzlv Apr 28 01:07:05.791: INFO: Got endpoints: latency-svc-4hzlv [712.904549ms] Apr 28 01:07:05.849: INFO: Created: latency-svc-vnh2n Apr 28 01:07:05.869: INFO: Created: latency-svc-gq6vw Apr 28 01:07:05.869: INFO: Got endpoints: latency-svc-vnh2n [760.816958ms] Apr 28 01:07:05.881: INFO: Got endpoints: latency-svc-gq6vw [730.591841ms] Apr 28 01:07:05.899: INFO: Created: latency-svc-6xwgg Apr 28 01:07:05.911: INFO: Got endpoints: latency-svc-6xwgg [707.678127ms] Apr 28 01:07:05.930: INFO: Created: latency-svc-rpm9l Apr 28 01:07:05.943: INFO: Got endpoints: latency-svc-rpm9l [649.248878ms] Apr 28 01:07:05.987: INFO: Created: latency-svc-tbjln Apr 28 01:07:06.000: INFO: Got endpoints: latency-svc-tbjln [699.583248ms] Apr 28 01:07:06.001: INFO: Created: latency-svc-9vbl5 Apr 28 01:07:06.014: INFO: Got endpoints: latency-svc-9vbl5 [656.108694ms] Apr 28 01:07:06.031: INFO: Created: latency-svc-fncpj Apr 28 01:07:06.044: INFO: Got 
endpoints: latency-svc-fncpj [625.65905ms] Apr 28 01:07:06.061: INFO: Created: latency-svc-r8vnh Apr 28 01:07:06.074: INFO: Got endpoints: latency-svc-r8vnh [629.143597ms] Apr 28 01:07:06.119: INFO: Created: latency-svc-lvhbt Apr 28 01:07:06.139: INFO: Got endpoints: latency-svc-lvhbt [657.759153ms] Apr 28 01:07:06.139: INFO: Created: latency-svc-nbwrk Apr 28 01:07:06.150: INFO: Got endpoints: latency-svc-nbwrk [633.255173ms] Apr 28 01:07:06.180: INFO: Created: latency-svc-zdbwj Apr 28 01:07:06.194: INFO: Got endpoints: latency-svc-zdbwj [613.489537ms] Apr 28 01:07:06.210: INFO: Created: latency-svc-2rcg9 Apr 28 01:07:06.257: INFO: Got endpoints: latency-svc-2rcg9 [662.029686ms] Apr 28 01:07:06.277: INFO: Created: latency-svc-vrt66 Apr 28 01:07:06.294: INFO: Got endpoints: latency-svc-vrt66 [658.667759ms] Apr 28 01:07:06.313: INFO: Created: latency-svc-gcvm2 Apr 28 01:07:06.331: INFO: Got endpoints: latency-svc-gcvm2 [606.041276ms] Apr 28 01:07:06.355: INFO: Created: latency-svc-vvkk6 Apr 28 01:07:06.388: INFO: Got endpoints: latency-svc-vvkk6 [597.494988ms] Apr 28 01:07:06.402: INFO: Created: latency-svc-7kt5l Apr 28 01:07:06.414: INFO: Got endpoints: latency-svc-7kt5l [544.965854ms] Apr 28 01:07:06.432: INFO: Created: latency-svc-8849v Apr 28 01:07:06.446: INFO: Got endpoints: latency-svc-8849v [564.949685ms] Apr 28 01:07:06.463: INFO: Created: latency-svc-22sp7 Apr 28 01:07:06.475: INFO: Got endpoints: latency-svc-22sp7 [564.116428ms] Apr 28 01:07:06.532: INFO: Created: latency-svc-qc5fx Apr 28 01:07:06.548: INFO: Got endpoints: latency-svc-qc5fx [604.977771ms] Apr 28 01:07:06.548: INFO: Created: latency-svc-nxbkz Apr 28 01:07:06.559: INFO: Got endpoints: latency-svc-nxbkz [558.875966ms] Apr 28 01:07:06.607: INFO: Created: latency-svc-6vdgm Apr 28 01:07:06.658: INFO: Got endpoints: latency-svc-6vdgm [643.554006ms] Apr 28 01:07:06.678: INFO: Created: latency-svc-m8gx2 Apr 28 01:07:06.691: INFO: Got endpoints: latency-svc-m8gx2 [646.907202ms] Apr 28 01:07:06.709: 
INFO: Created: latency-svc-chbsz Apr 28 01:07:06.721: INFO: Got endpoints: latency-svc-chbsz [647.101612ms] Apr 28 01:07:06.740: INFO: Created: latency-svc-x7kzd Apr 28 01:07:06.796: INFO: Got endpoints: latency-svc-x7kzd [656.6125ms] Apr 28 01:07:06.823: INFO: Created: latency-svc-57pv7 Apr 28 01:07:06.864: INFO: Got endpoints: latency-svc-57pv7 [713.510922ms] Apr 28 01:07:06.882: INFO: Created: latency-svc-nhl8g Apr 28 01:07:06.942: INFO: Got endpoints: latency-svc-nhl8g [747.740754ms] Apr 28 01:07:06.960: INFO: Created: latency-svc-bjssn Apr 28 01:07:06.971: INFO: Got endpoints: latency-svc-bjssn [713.878902ms] Apr 28 01:07:06.990: INFO: Created: latency-svc-jxl6j Apr 28 01:07:07.008: INFO: Got endpoints: latency-svc-jxl6j [713.332799ms] Apr 28 01:07:07.059: INFO: Created: latency-svc-hz78d Apr 28 01:07:07.081: INFO: Got endpoints: latency-svc-hz78d [749.77008ms] Apr 28 01:07:07.082: INFO: Created: latency-svc-cwgqw Apr 28 01:07:07.098: INFO: Got endpoints: latency-svc-cwgqw [709.705133ms] Apr 28 01:07:07.122: INFO: Created: latency-svc-mb8zd Apr 28 01:07:07.134: INFO: Got endpoints: latency-svc-mb8zd [720.093092ms] Apr 28 01:07:07.152: INFO: Created: latency-svc-vkhsf Apr 28 01:07:07.185: INFO: Got endpoints: latency-svc-vkhsf [738.688294ms] Apr 28 01:07:07.194: INFO: Created: latency-svc-7rg4b Apr 28 01:07:07.206: INFO: Got endpoints: latency-svc-7rg4b [731.102304ms] Apr 28 01:07:07.231: INFO: Created: latency-svc-dmxr8 Apr 28 01:07:07.242: INFO: Got endpoints: latency-svc-dmxr8 [694.744405ms] Apr 28 01:07:07.261: INFO: Created: latency-svc-rmvq4 Apr 28 01:07:07.272: INFO: Got endpoints: latency-svc-rmvq4 [713.000497ms] Apr 28 01:07:07.329: INFO: Created: latency-svc-8sp2q Apr 28 01:07:07.356: INFO: Got endpoints: latency-svc-8sp2q [697.989988ms] Apr 28 01:07:07.356: INFO: Created: latency-svc-6k9b4 Apr 28 01:07:07.380: INFO: Got endpoints: latency-svc-6k9b4 [688.848018ms] Apr 28 01:07:07.411: INFO: Created: latency-svc-8nwbj Apr 28 01:07:07.427: INFO: Got 
endpoints: latency-svc-8nwbj [705.380223ms] Apr 28 01:07:07.485: INFO: Created: latency-svc-nk6k9 Apr 28 01:07:07.487: INFO: Got endpoints: latency-svc-nk6k9 [691.105003ms] Apr 28 01:07:07.520: INFO: Created: latency-svc-hz7q9 Apr 28 01:07:07.554: INFO: Got endpoints: latency-svc-hz7q9 [690.184388ms] Apr 28 01:07:07.580: INFO: Created: latency-svc-j99s4 Apr 28 01:07:07.634: INFO: Got endpoints: latency-svc-j99s4 [691.803961ms] Apr 28 01:07:07.635: INFO: Created: latency-svc-6txrn Apr 28 01:07:07.648: INFO: Got endpoints: latency-svc-6txrn [677.2754ms] Apr 28 01:07:07.675: INFO: Created: latency-svc-jv7ln Apr 28 01:07:07.705: INFO: Got endpoints: latency-svc-jv7ln [697.077636ms] Apr 28 01:07:07.780: INFO: Created: latency-svc-fqxqz Apr 28 01:07:07.800: INFO: Created: latency-svc-ngsbr Apr 28 01:07:07.800: INFO: Got endpoints: latency-svc-fqxqz [719.301048ms] Apr 28 01:07:07.818: INFO: Got endpoints: latency-svc-ngsbr [719.488063ms] Apr 28 01:07:07.836: INFO: Created: latency-svc-7m48m Apr 28 01:07:07.867: INFO: Got endpoints: latency-svc-7m48m [732.153438ms] Apr 28 01:07:07.915: INFO: Created: latency-svc-mz785 Apr 28 01:07:07.933: INFO: Got endpoints: latency-svc-mz785 [747.809549ms] Apr 28 01:07:07.933: INFO: Created: latency-svc-zt2zv Apr 28 01:07:07.943: INFO: Got endpoints: latency-svc-zt2zv [736.807274ms] Apr 28 01:07:07.962: INFO: Created: latency-svc-lg9ts Apr 28 01:07:07.979: INFO: Got endpoints: latency-svc-lg9ts [736.688019ms] Apr 28 01:07:07.998: INFO: Created: latency-svc-9d8td Apr 28 01:07:08.053: INFO: Got endpoints: latency-svc-9d8td [781.263786ms] Apr 28 01:07:08.056: INFO: Created: latency-svc-jd9vs Apr 28 01:07:08.063: INFO: Got endpoints: latency-svc-jd9vs [706.833689ms] Apr 28 01:07:08.101: INFO: Created: latency-svc-wvmdn Apr 28 01:07:08.122: INFO: Got endpoints: latency-svc-wvmdn [742.39141ms] Apr 28 01:07:08.143: INFO: Created: latency-svc-ktvgh Apr 28 01:07:08.173: INFO: Got endpoints: latency-svc-ktvgh [746.151943ms] Apr 28 01:07:08.184: 
INFO: Created: latency-svc-6vzbm Apr 28 01:07:08.200: INFO: Got endpoints: latency-svc-6vzbm [713.380835ms] Apr 28 01:07:08.250: INFO: Created: latency-svc-d4tt9 Apr 28 01:07:08.266: INFO: Got endpoints: latency-svc-d4tt9 [711.628491ms] Apr 28 01:07:08.304: INFO: Created: latency-svc-spsnn Apr 28 01:07:08.326: INFO: Got endpoints: latency-svc-spsnn [691.747534ms] Apr 28 01:07:08.347: INFO: Created: latency-svc-z44xb Apr 28 01:07:08.362: INFO: Got endpoints: latency-svc-z44xb [713.185441ms] Apr 28 01:07:08.382: INFO: Created: latency-svc-7ttnf Apr 28 01:07:08.412: INFO: Got endpoints: latency-svc-7ttnf [707.439145ms] Apr 28 01:07:08.424: INFO: Created: latency-svc-mzr2q Apr 28 01:07:08.440: INFO: Got endpoints: latency-svc-mzr2q [639.792889ms] Apr 28 01:07:08.460: INFO: Created: latency-svc-ttmnx Apr 28 01:07:08.476: INFO: Got endpoints: latency-svc-ttmnx [658.871924ms] Apr 28 01:07:08.509: INFO: Created: latency-svc-vt74s Apr 28 01:07:08.538: INFO: Got endpoints: latency-svc-vt74s [671.262456ms] Apr 28 01:07:08.551: INFO: Created: latency-svc-nqq2n Apr 28 01:07:08.560: INFO: Got endpoints: latency-svc-nqq2n [627.473162ms] Apr 28 01:07:08.593: INFO: Created: latency-svc-955xm Apr 28 01:07:08.608: INFO: Got endpoints: latency-svc-955xm [665.138872ms] Apr 28 01:07:08.632: INFO: Created: latency-svc-dszzg Apr 28 01:07:08.658: INFO: Got endpoints: latency-svc-dszzg [678.956269ms] Apr 28 01:07:08.669: INFO: Created: latency-svc-flj62 Apr 28 01:07:08.686: INFO: Got endpoints: latency-svc-flj62 [632.13679ms] Apr 28 01:07:08.700: INFO: Created: latency-svc-7hjdc Apr 28 01:07:08.709: INFO: Got endpoints: latency-svc-7hjdc [646.116467ms] Apr 28 01:07:08.731: INFO: Created: latency-svc-59gh8 Apr 28 01:07:08.745: INFO: Got endpoints: latency-svc-59gh8 [622.730181ms] Apr 28 01:07:08.796: INFO: Created: latency-svc-thdzb Apr 28 01:07:08.826: INFO: Got endpoints: latency-svc-thdzb [653.020928ms] Apr 28 01:07:08.827: INFO: Created: latency-svc-rm6ct Apr 28 01:07:08.850: INFO: Got 
endpoints: latency-svc-rm6ct [649.844443ms] Apr 28 01:07:08.868: INFO: Created: latency-svc-9rbjp Apr 28 01:07:08.877: INFO: Got endpoints: latency-svc-9rbjp [611.099909ms] Apr 28 01:07:08.892: INFO: Created: latency-svc-p8ltt Apr 28 01:07:08.946: INFO: Got endpoints: latency-svc-p8ltt [619.887825ms] Apr 28 01:07:08.947: INFO: Created: latency-svc-skm2p Apr 28 01:07:08.974: INFO: Got endpoints: latency-svc-skm2p [612.290772ms] Apr 28 01:07:08.995: INFO: Created: latency-svc-fhklz Apr 28 01:07:09.004: INFO: Got endpoints: latency-svc-fhklz [591.287133ms] Apr 28 01:07:09.019: INFO: Created: latency-svc-5jrmj Apr 28 01:07:09.028: INFO: Got endpoints: latency-svc-5jrmj [587.498429ms] Apr 28 01:07:09.077: INFO: Created: latency-svc-twfct Apr 28 01:07:09.096: INFO: Got endpoints: latency-svc-twfct [619.672939ms] Apr 28 01:07:09.096: INFO: Created: latency-svc-dqdtc Apr 28 01:07:09.112: INFO: Got endpoints: latency-svc-dqdtc [573.658623ms] Apr 28 01:07:09.132: INFO: Created: latency-svc-xvlq7 Apr 28 01:07:09.148: INFO: Got endpoints: latency-svc-xvlq7 [587.816598ms] Apr 28 01:07:09.168: INFO: Created: latency-svc-pqkcn Apr 28 01:07:09.203: INFO: Got endpoints: latency-svc-pqkcn [594.282971ms] Apr 28 01:07:09.218: INFO: Created: latency-svc-pnfhh Apr 28 01:07:09.230: INFO: Got endpoints: latency-svc-pnfhh [571.977034ms] Apr 28 01:07:09.247: INFO: Created: latency-svc-8n6zk Apr 28 01:07:09.261: INFO: Got endpoints: latency-svc-8n6zk [574.889228ms] Apr 28 01:07:09.282: INFO: Created: latency-svc-tf5hr Apr 28 01:07:09.297: INFO: Got endpoints: latency-svc-tf5hr [587.597624ms] Apr 28 01:07:09.329: INFO: Created: latency-svc-6x42k Apr 28 01:07:09.332: INFO: Got endpoints: latency-svc-6x42k [586.649939ms] Apr 28 01:07:09.348: INFO: Created: latency-svc-z7nd8 Apr 28 01:07:09.356: INFO: Got endpoints: latency-svc-z7nd8 [530.109657ms] Apr 28 01:07:09.372: INFO: Created: latency-svc-x2lkk Apr 28 01:07:09.397: INFO: Got endpoints: latency-svc-x2lkk [546.416878ms] Apr 28 01:07:09.420: 
INFO: Created: latency-svc-rvh5z Apr 28 01:07:09.472: INFO: Got endpoints: latency-svc-rvh5z [595.185947ms] Apr 28 01:07:09.486: INFO: Created: latency-svc-99pj8 Apr 28 01:07:09.501: INFO: Got endpoints: latency-svc-99pj8 [555.482137ms] Apr 28 01:07:09.522: INFO: Created: latency-svc-czh8g Apr 28 01:07:09.552: INFO: Got endpoints: latency-svc-czh8g [577.843915ms] Apr 28 01:07:09.604: INFO: Created: latency-svc-6pnml Apr 28 01:07:09.624: INFO: Got endpoints: latency-svc-6pnml [620.61465ms] Apr 28 01:07:09.625: INFO: Created: latency-svc-lxh7v Apr 28 01:07:09.639: INFO: Got endpoints: latency-svc-lxh7v [611.303776ms] Apr 28 01:07:09.655: INFO: Created: latency-svc-wskdv Apr 28 01:07:09.696: INFO: Got endpoints: latency-svc-wskdv [600.103616ms] Apr 28 01:07:09.750: INFO: Created: latency-svc-jxp6m Apr 28 01:07:09.753: INFO: Got endpoints: latency-svc-jxp6m [641.162118ms] Apr 28 01:07:09.774: INFO: Created: latency-svc-v6vjt Apr 28 01:07:09.792: INFO: Got endpoints: latency-svc-v6vjt [643.662551ms] Apr 28 01:07:09.811: INFO: Created: latency-svc-qr2vm Apr 28 01:07:09.835: INFO: Got endpoints: latency-svc-qr2vm [631.961373ms] Apr 28 01:07:09.885: INFO: Created: latency-svc-v278f Apr 28 01:07:09.906: INFO: Got endpoints: latency-svc-v278f [675.870692ms] Apr 28 01:07:09.906: INFO: Created: latency-svc-cl688 Apr 28 01:07:09.919: INFO: Got endpoints: latency-svc-cl688 [658.573518ms] Apr 28 01:07:09.935: INFO: Created: latency-svc-7w8gh Apr 28 01:07:09.960: INFO: Got endpoints: latency-svc-7w8gh [663.47722ms] Apr 28 01:07:10.023: INFO: Created: latency-svc-gx7qz Apr 28 01:07:10.044: INFO: Got endpoints: latency-svc-gx7qz [712.676605ms] Apr 28 01:07:10.045: INFO: Created: latency-svc-wvrmd Apr 28 01:07:10.057: INFO: Got endpoints: latency-svc-wvrmd [700.909594ms] Apr 28 01:07:10.104: INFO: Created: latency-svc-mjwkh Apr 28 01:07:10.118: INFO: Got endpoints: latency-svc-mjwkh [721.493361ms] Apr 28 01:07:10.185: INFO: Created: latency-svc-4m7dz Apr 28 01:07:10.200: INFO: Got 
endpoints: latency-svc-4m7dz [728.212423ms] Apr 28 01:07:10.200: INFO: Created: latency-svc-cxvkx Apr 28 01:07:10.214: INFO: Got endpoints: latency-svc-cxvkx [712.870875ms] Apr 28 01:07:10.230: INFO: Created: latency-svc-q99bg Apr 28 01:07:10.245: INFO: Got endpoints: latency-svc-q99bg [692.584472ms] Apr 28 01:07:10.261: INFO: Created: latency-svc-lc9p7 Apr 28 01:07:10.274: INFO: Got endpoints: latency-svc-lc9p7 [649.750058ms] Apr 28 01:07:10.359: INFO: Created: latency-svc-6ndnw Apr 28 01:07:10.392: INFO: Got endpoints: latency-svc-6ndnw [752.804734ms] Apr 28 01:07:10.392: INFO: Created: latency-svc-pjddj Apr 28 01:07:10.406: INFO: Got endpoints: latency-svc-pjddj [709.452113ms] Apr 28 01:07:10.406: INFO: Latencies: [34.578883ms 95.454833ms 125.489723ms 173.356861ms 257.468922ms 269.276517ms 299.520788ms 329.603154ms 382.884239ms 412.013497ms 448.223759ms 530.109657ms 544.608523ms 544.965854ms 546.416878ms 555.482137ms 558.875966ms 564.116428ms 564.949685ms 571.977034ms 573.658623ms 574.889228ms 577.843915ms 580.237185ms 586.649939ms 587.498429ms 587.597624ms 587.816598ms 591.287133ms 594.282971ms 595.185947ms 595.364584ms 597.494988ms 600.103616ms 600.560208ms 604.977771ms 606.041276ms 611.099909ms 611.303776ms 612.290772ms 613.489537ms 618.598071ms 619.672939ms 619.887825ms 620.61465ms 622.730181ms 623.004051ms 623.712555ms 625.65905ms 627.473162ms 628.685317ms 629.143597ms 629.242816ms 631.961373ms 632.13679ms 633.255173ms 635.03893ms 639.792889ms 641.162118ms 643.554006ms 643.662551ms 643.676009ms 646.116467ms 646.656191ms 646.907202ms 647.101612ms 649.248878ms 649.750058ms 649.844443ms 651.550131ms 651.713929ms 652.651616ms 653.020928ms 653.374807ms 656.108694ms 656.6125ms 657.759153ms 658.573518ms 658.667759ms 658.871924ms 662.029686ms 663.47722ms 665.138872ms 665.187772ms 671.011229ms 671.262456ms 671.412198ms 673.026755ms 675.870692ms 676.609911ms 677.2754ms 678.956269ms 680.345947ms 681.87857ms 684.109158ms 687.908543ms 688.848018ms 688.888318ms 
690.184388ms 691.105003ms 691.747534ms 691.803961ms 692.584472ms 694.744405ms 695.084231ms 695.525251ms 697.077636ms 697.989988ms 699.583248ms 700.909594ms 701.237088ms 702.60761ms 705.380223ms 706.833689ms 706.956616ms 707.178338ms 707.439145ms 707.678127ms 709.194644ms 709.452113ms 709.705133ms 709.887895ms 711.628491ms 712.655141ms 712.676605ms 712.870875ms 712.904549ms 712.950948ms 713.000497ms 713.185441ms 713.332799ms 713.380835ms 713.383309ms 713.510922ms 713.878902ms 718.160911ms 719.264914ms 719.301048ms 719.488063ms 720.093092ms 721.493361ms 724.698036ms 724.854288ms 725.813685ms 728.212423ms 728.916951ms 729.547922ms 729.96537ms 730.290765ms 730.591841ms 731.102304ms 732.153438ms 734.678892ms 736.24514ms 736.688019ms 736.807274ms 738.688294ms 742.39141ms 742.855987ms 743.60492ms 746.151943ms 747.242175ms 747.740754ms 747.809549ms 749.77008ms 752.804734ms 753.003315ms 753.497206ms 755.601467ms 760.461787ms 760.547042ms 760.770497ms 760.816958ms 760.924538ms 770.346621ms 771.186421ms 778.537638ms 779.616657ms 781.117269ms 781.263786ms 787.490345ms 800.705882ms 802.791754ms 802.866803ms 803.962277ms 809.479911ms 814.466318ms 822.068708ms 824.973218ms 831.866495ms 832.31418ms 832.314237ms 833.976196ms 856.278619ms 857.66417ms 874.373122ms 891.481264ms 898.50278ms 899.313302ms 904.841507ms] Apr 28 01:07:10.406: INFO: 50 %ile: 691.747534ms Apr 28 01:07:10.406: INFO: 90 %ile: 787.490345ms Apr 28 01:07:10.406: INFO: 99 %ile: 899.313302ms Apr 28 01:07:10.406: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:07:10.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1856" for this suite. 
• [SLOW TEST:13.536 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":231,"skipped":3920,"failed":0}
SS
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:07:10.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Apr 28 01:07:10.551: INFO: Created pod &Pod{ObjectMeta:{dns-9558 dns-9558 /api/v1/namespaces/dns-9558/pods/dns-9558 ef638d76-2a53-4253-aad7-cf065088299b 11598412 0 2020-04-28 01:07:10 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wmrmb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wmrmb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wmrmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecre
ts:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:07:10.583: INFO: The status of Pod dns-9558 is Pending, waiting for it to be Running (with Ready = true) Apr 28 01:07:12.587: INFO: The status of Pod dns-9558 is Pending, waiting for it to be Running (with Ready = true) Apr 28 01:07:14.587: INFO: The status of Pod dns-9558 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 28 01:07:14.587: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9558 PodName:dns-9558 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 01:07:14.587: INFO: >>> kubeConfig: /root/.kube/config I0428 01:07:14.625030 7 log.go:172] (0xc00313ce70) (0xc002a1e780) Create stream I0428 01:07:14.625072 7 log.go:172] (0xc00313ce70) (0xc002a1e780) Stream added, broadcasting: 1 I0428 01:07:14.627155 7 log.go:172] (0xc00313ce70) Reply frame received for 1 I0428 01:07:14.627210 7 log.go:172] (0xc00313ce70) (0xc002a1eb40) Create stream I0428 01:07:14.627232 7 log.go:172] (0xc00313ce70) (0xc002a1eb40) Stream added, broadcasting: 3 I0428 01:07:14.628325 7 log.go:172] (0xc00313ce70) Reply frame received for 3 I0428 01:07:14.628372 7 log.go:172] (0xc00313ce70) (0xc002a1ec80) Create stream I0428 01:07:14.628387 7 log.go:172] (0xc00313ce70) (0xc002a1ec80) Stream added, broadcasting: 5 I0428 01:07:14.629708 7 log.go:172] (0xc00313ce70) Reply frame received for 5 I0428 01:07:14.739987 7 log.go:172] (0xc00313ce70) Data frame received for 3 I0428 01:07:14.740027 7 log.go:172] (0xc002a1eb40) (3) Data frame handling I0428 01:07:14.740054 7 log.go:172] (0xc002a1eb40) (3) Data frame sent I0428 01:07:14.740892 7 log.go:172] (0xc00313ce70) Data frame received for 3 I0428 01:07:14.740928 7 log.go:172] (0xc002a1eb40) (3) Data frame handling I0428 01:07:14.741014 7 log.go:172] (0xc00313ce70) Data frame received for 5 I0428 01:07:14.741027 7 log.go:172] (0xc002a1ec80) (5) Data frame handling I0428 01:07:14.742596 7 log.go:172] (0xc00313ce70) Data frame received for 1 I0428 01:07:14.742636 7 log.go:172] (0xc002a1e780) (1) Data frame handling I0428 01:07:14.742676 7 log.go:172] (0xc002a1e780) (1) Data frame sent I0428 01:07:14.742717 7 log.go:172] (0xc00313ce70) (0xc002a1e780) Stream removed, broadcasting: 1 I0428 01:07:14.742779 7 log.go:172] (0xc00313ce70) Go away received I0428 01:07:14.742868 7 log.go:172] (0xc00313ce70) 
(0xc002a1e780) Stream removed, broadcasting: 1 I0428 01:07:14.742885 7 log.go:172] (0xc00313ce70) (0xc002a1eb40) Stream removed, broadcasting: 3 I0428 01:07:14.742901 7 log.go:172] (0xc00313ce70) (0xc002a1ec80) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 28 01:07:14.742: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9558 PodName:dns-9558 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 01:07:14.743: INFO: >>> kubeConfig: /root/.kube/config I0428 01:07:14.774674 7 log.go:172] (0xc002edcb00) (0xc001fb97c0) Create stream I0428 01:07:14.774712 7 log.go:172] (0xc002edcb00) (0xc001fb97c0) Stream added, broadcasting: 1 I0428 01:07:14.776480 7 log.go:172] (0xc002edcb00) Reply frame received for 1 I0428 01:07:14.776518 7 log.go:172] (0xc002edcb00) (0xc002a1ed20) Create stream I0428 01:07:14.776532 7 log.go:172] (0xc002edcb00) (0xc002a1ed20) Stream added, broadcasting: 3 I0428 01:07:14.777757 7 log.go:172] (0xc002edcb00) Reply frame received for 3 I0428 01:07:14.777811 7 log.go:172] (0xc002edcb00) (0xc001fb9860) Create stream I0428 01:07:14.777829 7 log.go:172] (0xc002edcb00) (0xc001fb9860) Stream added, broadcasting: 5 I0428 01:07:14.778802 7 log.go:172] (0xc002edcb00) Reply frame received for 5 I0428 01:07:14.924132 7 log.go:172] (0xc002edcb00) Data frame received for 3 I0428 01:07:14.924158 7 log.go:172] (0xc002a1ed20) (3) Data frame handling I0428 01:07:14.924180 7 log.go:172] (0xc002a1ed20) (3) Data frame sent I0428 01:07:14.925080 7 log.go:172] (0xc002edcb00) Data frame received for 3 I0428 01:07:14.925233 7 log.go:172] (0xc002a1ed20) (3) Data frame handling I0428 01:07:14.925304 7 log.go:172] (0xc002edcb00) Data frame received for 5 I0428 01:07:14.925355 7 log.go:172] (0xc001fb9860) (5) Data frame handling I0428 01:07:14.926994 7 log.go:172] (0xc002edcb00) Data frame received for 1 I0428 01:07:14.927013 7 log.go:172] (0xc001fb97c0) (1) 
Data frame handling I0428 01:07:14.927028 7 log.go:172] (0xc001fb97c0) (1) Data frame sent I0428 01:07:14.927043 7 log.go:172] (0xc002edcb00) (0xc001fb97c0) Stream removed, broadcasting: 1 I0428 01:07:14.927116 7 log.go:172] (0xc002edcb00) (0xc001fb97c0) Stream removed, broadcasting: 1 I0428 01:07:14.927141 7 log.go:172] (0xc002edcb00) (0xc002a1ed20) Stream removed, broadcasting: 3 I0428 01:07:14.927150 7 log.go:172] (0xc002edcb00) (0xc001fb9860) Stream removed, broadcasting: 5 Apr 28 01:07:14.927: INFO: Deleting pod dns-9558... I0428 01:07:14.927259 7 log.go:172] (0xc002edcb00) Go away received [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:07:14.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9558" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":232,"skipped":3922,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:07:14.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 01:07:15.075: INFO: Create a RollingUpdate DaemonSet Apr 28 01:07:15.079: INFO: 
Check that daemon pods launch on every node of the cluster Apr 28 01:07:15.491: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 01:07:15.495: INFO: Number of nodes with available pods: 0 Apr 28 01:07:15.495: INFO: Node latest-worker is running more than one daemon pod Apr 28 01:07:16.581: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 01:07:16.602: INFO: Number of nodes with available pods: 0 Apr 28 01:07:16.603: INFO: Node latest-worker is running more than one daemon pod Apr 28 01:07:17.550: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 01:07:17.555: INFO: Number of nodes with available pods: 0 Apr 28 01:07:17.555: INFO: Node latest-worker is running more than one daemon pod Apr 28 01:07:18.725: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 01:07:18.802: INFO: Number of nodes with available pods: 0 Apr 28 01:07:18.802: INFO: Node latest-worker is running more than one daemon pod Apr 28 01:07:19.504: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 01:07:19.520: INFO: Number of nodes with available pods: 1 Apr 28 01:07:19.520: INFO: Node latest-worker is running more than one daemon pod Apr 28 01:07:20.499: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 01:07:20.514: INFO: 
Number of nodes with available pods: 2 Apr 28 01:07:20.514: INFO: Number of running nodes: 2, number of available pods: 2 Apr 28 01:07:20.514: INFO: Update the DaemonSet to trigger a rollout Apr 28 01:07:20.580: INFO: Updating DaemonSet daemon-set Apr 28 01:07:33.658: INFO: Roll back the DaemonSet before rollout is complete Apr 28 01:07:33.669: INFO: Updating DaemonSet daemon-set Apr 28 01:07:33.669: INFO: Make sure DaemonSet rollback is complete Apr 28 01:07:33.674: INFO: Wrong image for pod: daemon-set-zv6t8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 28 01:07:33.674: INFO: Pod daemon-set-zv6t8 is not available Apr 28 01:07:33.699: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 01:07:34.704: INFO: Wrong image for pod: daemon-set-zv6t8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 28 01:07:34.704: INFO: Pod daemon-set-zv6t8 is not available Apr 28 01:07:34.726: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 01:07:35.705: INFO: Pod daemon-set-r4fd2 is not available Apr 28 01:07:35.708: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8013, will wait for the garbage collector to delete the pods Apr 28 01:07:35.796: INFO: Deleting DaemonSet.extensions daemon-set took: 30.372965ms Apr 28 01:07:36.096: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.275196ms Apr 
28 01:07:40.407: INFO: Number of nodes with available pods: 0
Apr 28 01:07:40.407: INFO: Number of running nodes: 0, number of available pods: 0
Apr 28 01:07:40.409: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8013/daemonsets","resourceVersion":"11599207"},"items":null}
Apr 28 01:07:40.412: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8013/pods","resourceVersion":"11599207"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:07:40.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8013" for this suite.
• [SLOW TEST:25.462 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":233,"skipped":3924,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:07:40.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Apr 28 01:07:40.590: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8132 /api/v1/namespaces/watch-8132/configmaps/e2e-watch-test-resource-version 927fb517-d811-4ac9-8e95-4e6a654390ee 11599215 0 2020-04-28 01:07:40 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 28 01:07:40.590: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8132 /api/v1/namespaces/watch-8132/configmaps/e2e-watch-test-resource-version 927fb517-d811-4ac9-8e95-4e6a654390ee 11599216 0 2020-04-28 01:07:40 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:07:40.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8132" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":234,"skipped":3929,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:07:40.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 28 01:07:41.186: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 28 01:07:43.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632861, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632861, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0,
ext:63723632861, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632861, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 01:07:46.213: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:07:46.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6708" for this suite. STEP: Destroying namespace "webhook-6708-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.081 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":235,"skipped":3946,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:07:46.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 28 01:07:46.768: INFO: >>> kubeConfig: /root/.kube/config
Apr 28 01:07:49.684: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:08:00.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5881" for this suite.
• [SLOW TEST:13.537 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":236,"skipped":3947,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:08:00.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-66c5ff4b-806a-4ec6-9e79-abb28ba7e14b
STEP: Creating a pod to test consume secrets
Apr 28 01:08:00.294: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-519dce80-0f14-490b-9125-ee4b30908252" in namespace "projected-5616" to be "Succeeded or Failed"
Apr 28 01:08:00.311: INFO: Pod "pod-projected-secrets-519dce80-0f14-490b-9125-ee4b30908252": Phase="Pending", Reason="", readiness=false. Elapsed: 16.683374ms
Apr 28 01:08:02.314: INFO: Pod "pod-projected-secrets-519dce80-0f14-490b-9125-ee4b30908252": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020219037s
Apr 28 01:08:04.319: INFO: Pod "pod-projected-secrets-519dce80-0f14-490b-9125-ee4b30908252": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024828091s
STEP: Saw pod success
Apr 28 01:08:04.319: INFO: Pod "pod-projected-secrets-519dce80-0f14-490b-9125-ee4b30908252" satisfied condition "Succeeded or Failed"
Apr 28 01:08:04.322: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-519dce80-0f14-490b-9125-ee4b30908252 container projected-secret-volume-test:
STEP: delete the pod
Apr 28 01:08:04.341: INFO: Waiting for pod pod-projected-secrets-519dce80-0f14-490b-9125-ee4b30908252 to disappear
Apr 28 01:08:04.351: INFO: Pod pod-projected-secrets-519dce80-0f14-490b-9125-ee4b30908252 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:08:04.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5616" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":3992,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:08:04.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 28 01:08:05.126: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 28 01:08:07.135: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632885, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632885, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632885, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632885, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 28 01:08:10.166: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:08:10.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-225" for this suite.
STEP: Destroying namespace "webhook-225-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.959 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":238,"skipped":4001,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:08:10.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-bc3a5d4a-f85c-4c88-8cb7-a8a53b7e52a0
STEP: Creating a pod to test consume secrets
Apr 28 01:08:10.408: INFO: Waiting up to 5m0s for pod "pod-secrets-9c255cb9-84e1-426c-be7f-e857232e283e" in namespace "secrets-3598" to be "Succeeded or Failed"
Apr 28 01:08:10.412: INFO: Pod "pod-secrets-9c255cb9-84e1-426c-be7f-e857232e283e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.667394ms
Apr 28 01:08:12.487: INFO: Pod "pod-secrets-9c255cb9-84e1-426c-be7f-e857232e283e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078634128s
Apr 28 01:08:14.490: INFO: Pod "pod-secrets-9c255cb9-84e1-426c-be7f-e857232e283e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082339619s
STEP: Saw pod success
Apr 28 01:08:14.490: INFO: Pod "pod-secrets-9c255cb9-84e1-426c-be7f-e857232e283e" satisfied condition "Succeeded or Failed"
Apr 28 01:08:14.493: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-9c255cb9-84e1-426c-be7f-e857232e283e container secret-volume-test:
STEP: delete the pod
Apr 28 01:08:14.528: INFO: Waiting for pod pod-secrets-9c255cb9-84e1-426c-be7f-e857232e283e to disappear
Apr 28 01:08:14.531: INFO: Pod pod-secrets-9c255cb9-84e1-426c-be7f-e857232e283e no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:08:14.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3598" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4021,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:08:14.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 01:08:14.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 28 01:08:17.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1966 create -f -'
Apr 28 01:08:20.835: INFO: stderr: ""
Apr 28 01:08:20.835: INFO: stdout: "e2e-test-crd-publish-openapi-7192-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 28 01:08:20.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1966 delete e2e-test-crd-publish-openapi-7192-crds test-cr'
Apr 28 01:08:20.942: INFO: stderr: ""
Apr 28 01:08:20.942: INFO: stdout: "e2e-test-crd-publish-openapi-7192-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Apr 28 01:08:20.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1966 apply -f -'
Apr 28 01:08:21.228: INFO: stderr: ""
Apr 28 01:08:21.228: INFO: stdout: "e2e-test-crd-publish-openapi-7192-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 28 01:08:21.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1966 delete e2e-test-crd-publish-openapi-7192-crds test-cr'
Apr 28 01:08:21.339: INFO: stderr: ""
Apr 28 01:08:21.339: INFO: stdout: "e2e-test-crd-publish-openapi-7192-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 28 01:08:21.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7192-crds'
Apr 28 01:08:21.588: INFO: stderr: ""
Apr 28 01:08:21.588: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7192-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:08:24.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1966" for this suite.
• [SLOW TEST:9.944 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":240,"skipped":4036,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:08:24.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-cacc5d5e-47e2-427b-adb4-22ed25f011e8
STEP: Creating a pod to test consume secrets
Apr 28 01:08:24.553: INFO: Waiting up to 5m0s for pod "pod-secrets-67ef28c3-13ce-4598-9cbe-d7c945b20d3d" in namespace "secrets-8908" to be "Succeeded or Failed"
Apr 28 01:08:24.562: INFO: Pod "pod-secrets-67ef28c3-13ce-4598-9cbe-d7c945b20d3d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.927444ms
Apr 28 01:08:26.567: INFO: Pod "pod-secrets-67ef28c3-13ce-4598-9cbe-d7c945b20d3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0133511s
Apr 28 01:08:28.571: INFO: Pod "pod-secrets-67ef28c3-13ce-4598-9cbe-d7c945b20d3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017351481s
STEP: Saw pod success
Apr 28 01:08:28.571: INFO: Pod "pod-secrets-67ef28c3-13ce-4598-9cbe-d7c945b20d3d" satisfied condition "Succeeded or Failed"
Apr 28 01:08:28.574: INFO: Trying to get logs from node latest-worker pod pod-secrets-67ef28c3-13ce-4598-9cbe-d7c945b20d3d container secret-volume-test:
STEP: delete the pod
Apr 28 01:08:28.628: INFO: Waiting for pod pod-secrets-67ef28c3-13ce-4598-9cbe-d7c945b20d3d to disappear
Apr 28 01:08:28.652: INFO: Pod pod-secrets-67ef28c3-13ce-4598-9cbe-d7c945b20d3d no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:08:28.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8908" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4053,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:08:28.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0428 01:08:38.763877 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 28 01:08:38.763: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:08:38.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3258" for this suite.
• [SLOW TEST:10.110 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":242,"skipped":4090,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:08:38.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 28 01:08:38.859: INFO: Waiting up to 5m0s for pod "pod-bfba057f-2341-4279-a266-af35c2f77196" in namespace "emptydir-1833" to be "Succeeded or Failed"
Apr 28 01:08:38.879: INFO: Pod "pod-bfba057f-2341-4279-a266-af35c2f77196": Phase="Pending", Reason="", readiness=false. Elapsed: 20.444592ms
Apr 28 01:08:40.954: INFO: Pod "pod-bfba057f-2341-4279-a266-af35c2f77196": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094795531s
Apr 28 01:08:42.958: INFO: Pod "pod-bfba057f-2341-4279-a266-af35c2f77196": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09914069s
STEP: Saw pod success
Apr 28 01:08:42.958: INFO: Pod "pod-bfba057f-2341-4279-a266-af35c2f77196" satisfied condition "Succeeded or Failed"
Apr 28 01:08:42.961: INFO: Trying to get logs from node latest-worker pod pod-bfba057f-2341-4279-a266-af35c2f77196 container test-container:
STEP: delete the pod
Apr 28 01:08:43.002: INFO: Waiting for pod pod-bfba057f-2341-4279-a266-af35c2f77196 to disappear
Apr 28 01:08:43.048: INFO: Pod pod-bfba057f-2341-4279-a266-af35c2f77196 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:08:43.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1833" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4097,"failed":0}
SSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:08:43.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7185.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7185.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7185.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7185.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7185.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7185.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 28 01:08:49.221: INFO: DNS probes using dns-7185/dns-test-2387aa3a-d819-4968-9b5c-7663db84d9ff succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:08:49.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7185" for this suite.
• [SLOW TEST:6.225 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":244,"skipped":4105,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:08:49.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 28 01:08:50.050: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 28 01:08:52.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632930, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632930, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632930, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723632930, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 28 01:08:55.212: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 01:08:55.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2669-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:08:56.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3607" for this suite.
STEP: Destroying namespace "webhook-3607-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.145 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":245,"skipped":4123,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:08:56.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 01:08:56.513: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 28 01:08:56.521: INFO: Number of nodes with available pods: 0
Apr 28 01:08:56.521: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 28 01:08:56.565: INFO: Number of nodes with available pods: 0
Apr 28 01:08:56.565: INFO: Node latest-worker is running more than one daemon pod
Apr 28 01:08:57.569: INFO: Number of nodes with available pods: 0
Apr 28 01:08:57.569: INFO: Node latest-worker is running more than one daemon pod
Apr 28 01:08:58.570: INFO: Number of nodes with available pods: 0
Apr 28 01:08:58.570: INFO: Node latest-worker is running more than one daemon pod
Apr 28 01:08:59.569: INFO: Number of nodes with available pods: 1
Apr 28 01:08:59.569: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 28 01:08:59.601: INFO: Number of nodes with available pods: 1
Apr 28 01:08:59.602: INFO: Number of running nodes: 0, number of available pods: 1
Apr 28 01:09:00.606: INFO: Number of nodes with available pods: 0
Apr 28 01:09:00.606: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 28 01:09:00.632: INFO: Number of nodes with available pods: 0
Apr 28 01:09:00.632: INFO: Node latest-worker is running more than one daemon pod
Apr 28 01:09:01.636: INFO: Number of nodes with available pods: 0
Apr 28 01:09:01.636: INFO: Node latest-worker is running more than one daemon pod
Apr 28 01:09:02.636: INFO: Number of nodes with available pods: 0
Apr 28 01:09:02.636: INFO: Node latest-worker is running more than one daemon pod
Apr 28 01:09:03.636: INFO: Number of nodes with available pods: 0
Apr 28 01:09:03.636: INFO: Node latest-worker is running more than one daemon pod
Apr 28 01:09:04.636: INFO: Number of nodes with available pods: 0
Apr 28 01:09:04.636: INFO: Node latest-worker is running more than one daemon pod
Apr 28 01:09:05.636: INFO: Number of nodes with available pods: 0
Apr 28 01:09:05.636: INFO: Node latest-worker is running more than one daemon pod
Apr 28 01:09:06.636: INFO: Number of nodes with available pods: 1
Apr 28 01:09:06.636: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-35, will wait for the garbage collector to delete the pods
Apr 28 01:09:06.703: INFO: Deleting DaemonSet.extensions daemon-set took: 6.639809ms
Apr 28 01:09:07.003: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.299884ms
Apr 28 01:09:12.806: INFO: Number of nodes with available pods: 0
Apr 28 01:09:12.807: INFO: Number of running nodes: 0, number of available pods: 0
Apr 28 01:09:12.809: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-35/daemonsets","resourceVersion":"11600025"},"items":null}
Apr 28 01:09:12.812: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-35/pods","resourceVersion":"11600025"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:09:12.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-35" for this suite.
• [SLOW TEST:16.420 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":246,"skipped":4138,"failed":0}
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:09:12.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-eea916db-f334-480c-b1fb-744a6409e839
STEP: Creating a pod to test consume secrets
Apr 28 01:09:12.937: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7ba1768d-c427-4e44-a594-4f6754855cd5" in namespace "projected-6171" to be "Succeeded or Failed"
Apr 28 01:09:12.967: INFO: Pod "pod-projected-secrets-7ba1768d-c427-4e44-a594-4f6754855cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.112222ms
Apr 28 01:09:14.971: INFO: Pod "pod-projected-secrets-7ba1768d-c427-4e44-a594-4f6754855cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033853621s
Apr 28 01:09:16.975: INFO: Pod "pod-projected-secrets-7ba1768d-c427-4e44-a594-4f6754855cd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037550768s
STEP: Saw pod success
Apr 28 01:09:16.975: INFO: Pod "pod-projected-secrets-7ba1768d-c427-4e44-a594-4f6754855cd5" satisfied condition "Succeeded or Failed"
Apr 28 01:09:16.977: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-7ba1768d-c427-4e44-a594-4f6754855cd5 container projected-secret-volume-test:
STEP: delete the pod
Apr 28 01:09:17.018: INFO: Waiting for pod pod-projected-secrets-7ba1768d-c427-4e44-a594-4f6754855cd5 to disappear
Apr 28 01:09:17.030: INFO: Pod pod-projected-secrets-7ba1768d-c427-4e44-a594-4f6754855cd5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:09:17.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6171" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4139,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:09:17.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4290
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4290
STEP: creating replication controller externalsvc in namespace services-4290
I0428 01:09:17.271628 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4290, replica count: 2
I0428 01:09:20.322089 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0428 01:09:23.322355 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Apr 28 01:09:23.375: INFO: Creating new exec pod
Apr 28 01:09:27.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4290 execpodlznzp -- /bin/sh -x -c nslookup clusterip-service'
Apr 28 01:09:27.655: INFO: stderr: "I0428 01:09:27.565558 3162 log.go:172] (0xc00003a420) (0xc0009ac000) Create stream\nI0428 01:09:27.565620 3162 log.go:172] (0xc00003a420) (0xc0009ac000) Stream added, broadcasting: 1\nI0428 01:09:27.568181 3162 log.go:172] (0xc00003a420) Reply frame received for 1\nI0428 01:09:27.568225 3162 log.go:172] (0xc00003a420) (0xc0007c32c0) Create stream\nI0428 01:09:27.568235 3162 log.go:172] (0xc00003a420) (0xc0007c32c0) Stream added, broadcasting: 3\nI0428 01:09:27.569524 3162 log.go:172] (0xc00003a420) Reply frame received for 3\nI0428 01:09:27.569562 3162 log.go:172] (0xc00003a420) (0xc0009ac0a0) Create stream\nI0428 01:09:27.569576 3162 log.go:172] (0xc00003a420) (0xc0009ac0a0) Stream added, broadcasting: 5\nI0428 01:09:27.570646 3162 log.go:172] (0xc00003a420) Reply frame received for 5\nI0428 01:09:27.639518 3162 log.go:172] (0xc00003a420) Data frame received for 5\nI0428 01:09:27.639538 3162 log.go:172] (0xc0009ac0a0) (5) Data frame handling\nI0428 01:09:27.639553 3162 log.go:172] (0xc0009ac0a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0428 01:09:27.646981 3162 log.go:172] (0xc00003a420) Data frame received for 3\nI0428 01:09:27.647010 3162 log.go:172] (0xc0007c32c0) (3) Data frame handling\nI0428 01:09:27.647037 3162 log.go:172] (0xc0007c32c0) (3) Data frame sent\nI0428 01:09:27.648137 3162 log.go:172] (0xc00003a420) Data frame received for 3\nI0428 01:09:27.648177 3162 log.go:172] (0xc0007c32c0) (3) Data frame handling\nI0428 01:09:27.648225 3162 log.go:172] (0xc0007c32c0) (3) Data frame sent\nI0428 01:09:27.648510 3162 log.go:172] (0xc00003a420) Data frame received for 5\nI0428 01:09:27.648542 3162 log.go:172] (0xc0009ac0a0) (5) Data frame handling\nI0428 01:09:27.648742 3162 log.go:172] (0xc00003a420) Data frame received for 3\nI0428 01:09:27.648761 3162 log.go:172] (0xc0007c32c0) (3) Data frame handling\nI0428 01:09:27.650809 3162 log.go:172] (0xc00003a420) Data frame received for 1\nI0428 01:09:27.650851 3162 log.go:172] (0xc0009ac000) (1) Data frame handling\nI0428 01:09:27.650881 3162 log.go:172] (0xc0009ac000) (1) Data frame sent\nI0428 01:09:27.650920 3162 log.go:172] (0xc00003a420) (0xc0009ac000) Stream removed, broadcasting: 1\nI0428 01:09:27.650946 3162 log.go:172] (0xc00003a420) Go away received\nI0428 01:09:27.651251 3162 log.go:172] (0xc00003a420) (0xc0009ac000) Stream removed, broadcasting: 1\nI0428 01:09:27.651267 3162 log.go:172] (0xc00003a420) (0xc0007c32c0) Stream removed, broadcasting: 3\nI0428 01:09:27.651275 3162 log.go:172] (0xc00003a420) (0xc0009ac0a0) Stream removed, broadcasting: 5\n"
Apr 28 01:09:27.655: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4290.svc.cluster.local\tcanonical name = externalsvc.services-4290.svc.cluster.local.\nName:\texternalsvc.services-4290.svc.cluster.local\nAddress: 10.96.120.49\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4290, will wait for the garbage collector to delete the pods
Apr 28 01:09:27.714: INFO: Deleting ReplicationController externalsvc took: 6.063004ms
Apr 28 01:09:27.814: INFO: Terminating ReplicationController externalsvc pods took: 100.18062ms
Apr 28 01:09:43.037: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:09:43.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4290" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:26.074 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":248,"skipped":4153,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:09:43.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-dcc57586-362b-4620-a150-ad9a9b278d5e in namespace container-probe-1232
Apr 28 01:09:47.257: INFO: Started pod liveness-dcc57586-362b-4620-a150-ad9a9b278d5e in namespace container-probe-1232
STEP: checking the pod's current state and verifying that restartCount is present
Apr 28 01:09:47.260: INFO: Initial restart count of pod liveness-dcc57586-362b-4620-a150-ad9a9b278d5e is 0
Apr 28 01:10:07.455: INFO: Restart count of pod container-probe-1232/liveness-dcc57586-362b-4620-a150-ad9a9b278d5e is now 1 (20.194901448s elapsed)
Apr 28 01:10:27.495: INFO: Restart count of pod container-probe-1232/liveness-dcc57586-362b-4620-a150-ad9a9b278d5e is now 2 (40.234389802s elapsed)
Apr 28 01:10:47.618: INFO: Restart count of pod container-probe-1232/liveness-dcc57586-362b-4620-a150-ad9a9b278d5e is now 3 (1m0.357362366s elapsed)
Apr 28 01:11:07.659: INFO: Restart count of pod container-probe-1232/liveness-dcc57586-362b-4620-a150-ad9a9b278d5e is now 4 (1m20.398995058s elapsed)
Apr 28 01:12:17.909: INFO: Restart count of pod container-probe-1232/liveness-dcc57586-362b-4620-a150-ad9a9b278d5e is now 5 (2m30.64879182s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:12:17.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1232" for this suite.
• [SLOW TEST:154.853 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4173,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:12:17.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 28 01:12:18.885: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 28 01:12:20.894: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723633138, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723633138, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723633139, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723633138, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 28 01:12:23.998: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 01:12:24.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4678-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:12:25.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8559" for this suite.
STEP: Destroying namespace "webhook-8559-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.279 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":250,"skipped":4179,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:12:25.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:12:32.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5813" for this suite.
• [SLOW TEST:7.135 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":251,"skipped":4195,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:12:32.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 28 01:12:32.480: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 28 01:12:32.503: INFO: Waiting for terminating namespaces to be deleted...
Apr 28 01:12:32.506: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 28 01:12:32.520: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 28 01:12:32.520: INFO: Container kindnet-cni ready: true, restart count 0
Apr 28 01:12:32.520: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 28 01:12:32.520: INFO: Container kube-proxy ready: true, restart count 0
Apr 28 01:12:32.520: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 28 01:12:32.537: INFO: pod-adoption from replication-controller-5813 started at 2020-04-28 01:12:25 +0000 UTC (1 container statuses recorded)
Apr 28 01:12:32.538: INFO: Container pod-adoption ready: true, restart count 0
Apr 28 01:12:32.538: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 28 01:12:32.538: INFO: Container kindnet-cni ready: true, restart count 0
Apr 28 01:12:32.538: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 28 01:12:32.538: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1609d6a61ce01cad], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1609d6a61ebd0d93], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:12:33.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9235" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":252,"skipped":4226,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:12:33.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 28 01:12:33.654: INFO: Waiting up to 5m0s for pod "pod-4f40dcde-2ea1-4513-ac1a-0a37a0de6e39" in namespace "emptydir-3878" to be "Succeeded or Failed"
Apr 28 01:12:33.671: INFO: Pod "pod-4f40dcde-2ea1-4513-ac1a-0a37a0de6e39": Phase="Pending", Reason="", readiness=false. Elapsed: 17.418502ms
Apr 28 01:12:35.676: INFO: Pod "pod-4f40dcde-2ea1-4513-ac1a-0a37a0de6e39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022110653s
Apr 28 01:12:37.680: INFO: Pod "pod-4f40dcde-2ea1-4513-ac1a-0a37a0de6e39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026029481s
STEP: Saw pod success
Apr 28 01:12:37.680: INFO: Pod "pod-4f40dcde-2ea1-4513-ac1a-0a37a0de6e39" satisfied condition "Succeeded or Failed"
Apr 28 01:12:37.683: INFO: Trying to get logs from node latest-worker pod pod-4f40dcde-2ea1-4513-ac1a-0a37a0de6e39 container test-container:
STEP: delete the pod
Apr 28 01:12:37.702: INFO: Waiting for pod pod-4f40dcde-2ea1-4513-ac1a-0a37a0de6e39 to disappear
Apr 28 01:12:37.843: INFO: Pod pod-4f40dcde-2ea1-4513-ac1a-0a37a0de6e39 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:12:37.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3878" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4235,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:12:37.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-6d50b35e-ebfe-4710-a6a9-89b58626b7a9
STEP: Creating a pod to test consume configMaps
Apr 28 01:12:38.101: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3c85105-33ef-4a14-90cb-e1e2a7bb6509" in namespace "projected-4605" to be "Succeeded or Failed"
Apr 28 01:12:38.113: INFO: Pod "pod-projected-configmaps-b3c85105-33ef-4a14-90cb-e1e2a7bb6509": Phase="Pending", Reason="", readiness=false. Elapsed: 11.768471ms
Apr 28 01:12:40.178: INFO: Pod "pod-projected-configmaps-b3c85105-33ef-4a14-90cb-e1e2a7bb6509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076663605s
Apr 28 01:12:42.182: INFO: Pod "pod-projected-configmaps-b3c85105-33ef-4a14-90cb-e1e2a7bb6509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080831982s
STEP: Saw pod success
Apr 28 01:12:42.182: INFO: Pod "pod-projected-configmaps-b3c85105-33ef-4a14-90cb-e1e2a7bb6509" satisfied condition "Succeeded or Failed"
Apr 28 01:12:42.185: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b3c85105-33ef-4a14-90cb-e1e2a7bb6509 container projected-configmap-volume-test:
STEP: delete the pod
Apr 28 01:12:42.222: INFO: Waiting for pod pod-projected-configmaps-b3c85105-33ef-4a14-90cb-e1e2a7bb6509 to disappear
Apr 28 01:12:42.239: INFO: Pod pod-projected-configmaps-b3c85105-33ef-4a14-90cb-e1e2a7bb6509 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:12:42.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4605" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4282,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:12:42.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 28 01:12:42.319: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b8ddf5e-5530-44e2-b15a-f9244ad5dbb8" in namespace "projected-1694" to be "Succeeded or Failed"
Apr 28 01:12:42.340: INFO: Pod "downwardapi-volume-7b8ddf5e-5530-44e2-b15a-f9244ad5dbb8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.521429ms
Apr 28 01:12:44.344: INFO: Pod "downwardapi-volume-7b8ddf5e-5530-44e2-b15a-f9244ad5dbb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025391457s
Apr 28 01:12:46.348: INFO: Pod "downwardapi-volume-7b8ddf5e-5530-44e2-b15a-f9244ad5dbb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029482768s
STEP: Saw pod success
Apr 28 01:12:46.348: INFO: Pod "downwardapi-volume-7b8ddf5e-5530-44e2-b15a-f9244ad5dbb8" satisfied condition "Succeeded or Failed"
Apr 28 01:12:46.351: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7b8ddf5e-5530-44e2-b15a-f9244ad5dbb8 container client-container:
STEP: delete the pod
Apr 28 01:12:46.372: INFO: Waiting for pod downwardapi-volume-7b8ddf5e-5530-44e2-b15a-f9244ad5dbb8 to disappear
Apr 28 01:12:46.424: INFO: Pod downwardapi-volume-7b8ddf5e-5530-44e2-b15a-f9244ad5dbb8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:12:46.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1694" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4315,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:12:46.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6101, will wait for the garbage collector to delete the pods
Apr 28 01:12:52.582: INFO: Deleting Job.batch foo took: 6.788566ms
Apr 28 01:12:52.782: INFO: Terminating Job.batch foo pods took: 200.247604ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 01:13:33.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6101" for this suite.
• [SLOW TEST:46.657 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":256,"skipped":4332,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 01:13:33.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 01:13:37.226: INFO: Waiting up to 5m0s for pod "client-envvars-0f3bbadc-7202-413f-8990-959efa4d34c6" in namespace "pods-7322" to be "Succeeded or Failed"
Apr 28 01:13:37.240: INFO: Pod "client-envvars-0f3bbadc-7202-413f-8990-959efa4d34c6": Phase="Pending", Reason="", readiness=false.
Elapsed: 13.995141ms Apr 28 01:13:39.243: INFO: Pod "client-envvars-0f3bbadc-7202-413f-8990-959efa4d34c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017551945s Apr 28 01:13:41.248: INFO: Pod "client-envvars-0f3bbadc-7202-413f-8990-959efa4d34c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022245612s STEP: Saw pod success Apr 28 01:13:41.248: INFO: Pod "client-envvars-0f3bbadc-7202-413f-8990-959efa4d34c6" satisfied condition "Succeeded or Failed" Apr 28 01:13:41.251: INFO: Trying to get logs from node latest-worker2 pod client-envvars-0f3bbadc-7202-413f-8990-959efa4d34c6 container env3cont: STEP: delete the pod Apr 28 01:13:41.271: INFO: Waiting for pod client-envvars-0f3bbadc-7202-413f-8990-959efa4d34c6 to disappear Apr 28 01:13:41.286: INFO: Pod client-envvars-0f3bbadc-7202-413f-8990-959efa4d34c6 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:13:41.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7322" for this suite. 
• [SLOW TEST:8.199 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:13:41.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 01:13:41.362: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:13:42.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1804" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":258,"skipped":4432,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:13:42.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 01:13:42.659: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 28 01:13:44.704: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:13:44.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8981" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":259,"skipped":4441,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:13:44.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 01:13:44.830: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89569cc2-a3d4-49fc-9597-20745b8651de" in namespace "downward-api-5420" to be "Succeeded or Failed" Apr 28 01:13:44.833: INFO: Pod "downwardapi-volume-89569cc2-a3d4-49fc-9597-20745b8651de": Phase="Pending", Reason="", readiness=false. Elapsed: 3.482969ms Apr 28 01:13:46.981: INFO: Pod "downwardapi-volume-89569cc2-a3d4-49fc-9597-20745b8651de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151148777s Apr 28 01:13:48.985: INFO: Pod "downwardapi-volume-89569cc2-a3d4-49fc-9597-20745b8651de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15552514s Apr 28 01:13:51.020: INFO: Pod "downwardapi-volume-89569cc2-a3d4-49fc-9597-20745b8651de": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.190350546s STEP: Saw pod success Apr 28 01:13:51.020: INFO: Pod "downwardapi-volume-89569cc2-a3d4-49fc-9597-20745b8651de" satisfied condition "Succeeded or Failed" Apr 28 01:13:51.023: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-89569cc2-a3d4-49fc-9597-20745b8651de container client-container: STEP: delete the pod Apr 28 01:13:51.050: INFO: Waiting for pod downwardapi-volume-89569cc2-a3d4-49fc-9597-20745b8651de to disappear Apr 28 01:13:51.064: INFO: Pod downwardapi-volume-89569cc2-a3d4-49fc-9597-20745b8651de no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:13:51.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5420" for this suite. • [SLOW TEST:6.307 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4441,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:13:51.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in 
namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-6a28602d-0b3d-48d7-9031-38b761da0c02 STEP: Creating a pod to test consume secrets Apr 28 01:13:51.385: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f007f3c1-2301-4b4c-9e78-d161ad6bc6b0" in namespace "projected-9371" to be "Succeeded or Failed" Apr 28 01:13:51.390: INFO: Pod "pod-projected-secrets-f007f3c1-2301-4b4c-9e78-d161ad6bc6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.770758ms Apr 28 01:13:53.394: INFO: Pod "pod-projected-secrets-f007f3c1-2301-4b4c-9e78-d161ad6bc6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009005225s Apr 28 01:13:55.400: INFO: Pod "pod-projected-secrets-f007f3c1-2301-4b4c-9e78-d161ad6bc6b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014738941s STEP: Saw pod success Apr 28 01:13:55.400: INFO: Pod "pod-projected-secrets-f007f3c1-2301-4b4c-9e78-d161ad6bc6b0" satisfied condition "Succeeded or Failed" Apr 28 01:13:55.403: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-f007f3c1-2301-4b4c-9e78-d161ad6bc6b0 container secret-volume-test: STEP: delete the pod Apr 28 01:13:55.422: INFO: Waiting for pod pod-projected-secrets-f007f3c1-2301-4b4c-9e78-d161ad6bc6b0 to disappear Apr 28 01:13:55.426: INFO: Pod pod-projected-secrets-f007f3c1-2301-4b4c-9e78-d161ad6bc6b0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:13:55.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9371" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:13:55.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 01:13:55.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2568ceef-77bb-40d4-af85-fa9243e1abac" in namespace "downward-api-229" to be "Succeeded or Failed" Apr 28 01:13:55.544: INFO: Pod "downwardapi-volume-2568ceef-77bb-40d4-af85-fa9243e1abac": Phase="Pending", Reason="", readiness=false. Elapsed: 49.537438ms Apr 28 01:13:57.548: INFO: Pod "downwardapi-volume-2568ceef-77bb-40d4-af85-fa9243e1abac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053324502s Apr 28 01:13:59.552: INFO: Pod "downwardapi-volume-2568ceef-77bb-40d4-af85-fa9243e1abac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.05757213s STEP: Saw pod success Apr 28 01:13:59.552: INFO: Pod "downwardapi-volume-2568ceef-77bb-40d4-af85-fa9243e1abac" satisfied condition "Succeeded or Failed" Apr 28 01:13:59.556: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2568ceef-77bb-40d4-af85-fa9243e1abac container client-container: STEP: delete the pod Apr 28 01:13:59.618: INFO: Waiting for pod downwardapi-volume-2568ceef-77bb-40d4-af85-fa9243e1abac to disappear Apr 28 01:13:59.625: INFO: Pod downwardapi-volume-2568ceef-77bb-40d4-af85-fa9243e1abac no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:13:59.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-229" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4536,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:13:59.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Apr 28 01:13:59.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9851' Apr 28 01:14:00.003: INFO: stderr: "" Apr 28 01:14:00.003: INFO: stdout: "pod/pause created\n" Apr 28 01:14:00.003: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 28 01:14:00.003: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9851" to be "running and ready" Apr 28 01:14:00.013: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.439154ms Apr 28 01:14:02.035: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031833307s Apr 28 01:14:04.039: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.035583853s Apr 28 01:14:04.039: INFO: Pod "pause" satisfied condition "running and ready" Apr 28 01:14:04.039: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Apr 28 01:14:04.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9851' Apr 28 01:14:04.155: INFO: stderr: "" Apr 28 01:14:04.155: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 28 01:14:04.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9851' Apr 28 01:14:04.250: INFO: stderr: "" Apr 28 01:14:04.250: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 28 01:14:04.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9851' Apr 28 01:14:04.345: INFO: stderr: "" Apr 28 01:14:04.345: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 28 01:14:04.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9851' Apr 28 01:14:04.444: INFO: stderr: "" Apr 28 01:14:04.444: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Apr 28 01:14:04.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9851' Apr 28 01:14:04.566: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 28 01:14:04.567: INFO: stdout: "pod \"pause\" force deleted\n" Apr 28 01:14:04.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9851' Apr 28 01:14:04.992: INFO: stderr: "No resources found in kubectl-9851 namespace.\n" Apr 28 01:14:04.992: INFO: stdout: "" Apr 28 01:14:04.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9851 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 01:14:05.081: INFO: stderr: "" Apr 28 01:14:05.081: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:14:05.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9851" for this suite. • [SLOW TEST:5.475 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":263,"skipped":4548,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:14:05.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:14:16.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2099" for this suite. • [SLOW TEST:11.087 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":264,"skipped":4565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:14:16.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Apr 28 01:14:16.272: INFO: Waiting up to 5m0s for pod "client-containers-51b497e7-db83-4150-aea6-61f18eedc6d4" in namespace "containers-5519" to be "Succeeded or Failed" Apr 28 01:14:16.277: INFO: Pod "client-containers-51b497e7-db83-4150-aea6-61f18eedc6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.82694ms Apr 28 01:14:18.281: INFO: Pod "client-containers-51b497e7-db83-4150-aea6-61f18eedc6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008627604s Apr 28 01:14:20.285: INFO: Pod "client-containers-51b497e7-db83-4150-aea6-61f18eedc6d4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013048849s STEP: Saw pod success Apr 28 01:14:20.285: INFO: Pod "client-containers-51b497e7-db83-4150-aea6-61f18eedc6d4" satisfied condition "Succeeded or Failed" Apr 28 01:14:20.288: INFO: Trying to get logs from node latest-worker2 pod client-containers-51b497e7-db83-4150-aea6-61f18eedc6d4 container test-container: STEP: delete the pod Apr 28 01:14:20.302: INFO: Waiting for pod client-containers-51b497e7-db83-4150-aea6-61f18eedc6d4 to disappear Apr 28 01:14:20.307: INFO: Pod client-containers-51b497e7-db83-4150-aea6-61f18eedc6d4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:14:20.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5519" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4600,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:14:20.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 01:14:20.393: INFO: Creating deployment "webserver-deployment" Apr 28 
01:14:20.397: INFO: Waiting for observed generation 1 Apr 28 01:14:22.417: INFO: Waiting for all required pods to come up Apr 28 01:14:22.422: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 28 01:14:32.432: INFO: Waiting for deployment "webserver-deployment" to complete Apr 28 01:14:32.438: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 28 01:14:32.444: INFO: Updating deployment webserver-deployment Apr 28 01:14:32.444: INFO: Waiting for observed generation 2 Apr 28 01:14:34.778: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 28 01:14:34.822: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 28 01:14:34.824: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 28 01:14:34.832: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 28 01:14:34.832: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 28 01:14:34.834: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 28 01:14:34.837: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 28 01:14:34.837: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 28 01:14:34.843: INFO: Updating deployment webserver-deployment Apr 28 01:14:34.843: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 28 01:14:35.039: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 28 01:14:35.179: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 28 01:14:35.408: INFO: Deployment 
"webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-4975 /apis/apps/v1/namespaces/deployment-4975/deployments/webserver-deployment 8cd6119a-c5d1-4beb-842f-e5d9eab8481e 11601845 3 2020-04-28 01:14:20 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052f70b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-28 01:14:32 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-28 01:14:35 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 
UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 28 01:14:35.474: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-4975 /apis/apps/v1/namespaces/deployment-4975/replicasets/webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 11601875 3 2020-04-28 01:14:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 8cd6119a-c5d1-4beb-842f-e5d9eab8481e 0xc0052f7847 0xc0052f7848}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052f78b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 28 01:14:35.474: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 28 01:14:35.474: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-4975 
/apis/apps/v1/namespaces/deployment-4975/replicasets/webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 11601885 3 2020-04-28 01:14:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 8cd6119a-c5d1-4beb-842f-e5d9eab8481e 0xc0052f7787 0xc0052f7788}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052f77e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 28 01:14:35.632: INFO: Pod "webserver-deployment-595b5b9587-2kprg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2kprg webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-2kprg 40f4a98f-0251-4ca4-b291-4893892b1f47 11601694 0 2020-04-28 01:14:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003597cb0 0xc003597cb1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup
:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.204,StartTime:2020-04-28 01:14:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 01:14:23 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5263c7e4ecac9428b9caf1e954eda171f44ec412b2d4698eea9e33eba91e0e9f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.204,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.632: INFO: Pod "webserver-deployment-595b5b9587-4z8f8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4z8f8 webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-4z8f8 de15e7d4-8871-4a5e-8461-8d357f97bc9c 11601895 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003597e27 0xc003597e28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-28 01:14:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.633: INFO: Pod "webserver-deployment-595b5b9587-59x5c" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-59x5c webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-59x5c 342e423f-eaa6-4b0a-a35a-ca9818f965eb 11601854 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003597fa7 0xc003597fa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.633: INFO: Pod "webserver-deployment-595b5b9587-7qtsk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7qtsk webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-7qtsk 5e40dc60-9cde-4f57-95f6-b172379b7c18 11601857 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8c0c7 0xc003a8c0c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.633: INFO: Pod "webserver-deployment-595b5b9587-cdcjm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cdcjm webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-cdcjm 6fc2f657-c000-4bc6-84cd-ef7379a1b227 11601890 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8c1e7 0xc003a8c1e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.633: INFO: Pod "webserver-deployment-595b5b9587-cf97b" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cf97b webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-cf97b 61d17d95-a5b0-4717-aaba-94bf0ee23c01 11601889 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8c307 0xc003a8c308}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.633: INFO: Pod "webserver-deployment-595b5b9587-cklzl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cklzl webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-cklzl fcc043c2-ca6a-45a0-a96e-f6440238bbfa 11601888 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8c427 0xc003a8c428}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.634: INFO: Pod "webserver-deployment-595b5b9587-dn9dr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dn9dr webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-dn9dr bb8254aa-58e8-4967-8f0f-a9a5838d2f3d 11601887 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8c5d7 0xc003a8c5d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.634: INFO: Pod "webserver-deployment-595b5b9587-dtxmq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dtxmq webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-dtxmq b9717b9c-8b9f-4e5f-a335-6f9411a76c35 11601745 0 2020-04-28 01:14:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8c817 0xc003a8c818}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.208,StartTime:2020-04-28 01:14:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 01:14:30 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ab8573e5f3df0ad3f04e8da8e6c4a60a97fd59afb4f8a46167c2d17400b663ed,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.208,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.634: INFO: Pod "webserver-deployment-595b5b9587-h7xh6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h7xh6 webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-h7xh6 e5868204-126f-4cc9-abf2-d1582abad5ec 11601860 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8c9c7 0xc003a8c9c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.634: INFO: Pod "webserver-deployment-595b5b9587-hlnm6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hlnm6 webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-hlnm6 42440718-40b6-4f74-ad3d-d28bf110e47f 11601874 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8cae7 0xc003a8cae8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.634: INFO: Pod "webserver-deployment-595b5b9587-htwlh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-htwlh webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-htwlh 305b918a-935c-4c0d-81aa-2216358f182b 11601859 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8cc07 0xc003a8cc08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.635: INFO: Pod "webserver-deployment-595b5b9587-jkt68" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jkt68 webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-jkt68 d000aea5-4416-4c47-8532-f5b2596a60f8 11601738 0 2020-04-28 01:14:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8cd27 0xc003a8cd28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.206,StartTime:2020-04-28 01:14:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 01:14:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://82beb67630809dfeddc93e7e0b6ed6b32e22d3e38cfd1ed6811ad5af1d9dcf0f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.635: INFO: Pod "webserver-deployment-595b5b9587-kw6g6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kw6g6 webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-kw6g6 62f0f3bd-a0fd-4173-b9ad-82dd67b7772f 11601714 0 2020-04-28 01:14:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8cea7 0xc003a8cea8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.204,StartTime:2020-04-28 01:14:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 01:14:27 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0b4c8208ca3d197d3dea462febdf637ecf25480aa7f5dcc9276765c77e1238b9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.204,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.635: INFO: Pod "webserver-deployment-595b5b9587-pzgvx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pzgvx webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-pzgvx febc423d-0226-4d04-92f0-7f61ef24d25c 11601853 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8d027 0xc003a8d028}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.635: INFO: Pod "webserver-deployment-595b5b9587-qdh2h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qdh2h webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-qdh2h a79817e2-6253-4d77-94a4-efbf6ee8050c 11601752 0 2020-04-28 01:14:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8d157 0xc003a8d158}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.208,StartTime:2020-04-28 01:14:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 01:14:30 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7610d5e5e497f73ded5264a5d2a896d476d3adfde1089dc39138160d280a1d17,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.208,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.636: INFO: Pod "webserver-deployment-595b5b9587-rrw5z" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rrw5z webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-rrw5z c527f374-d58e-4a87-b6b4-41f0d366b109 11601722 0 2020-04-28 01:14:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8d2d7 0xc003a8d2d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.205,StartTime:2020-04-28 01:14:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 01:14:27 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7fa423aeaf43c8757efdf35665f149bb368538cc9a6848263e3fc223e2f78224,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.205,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.636: INFO: Pod "webserver-deployment-595b5b9587-s58h5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s58h5 webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-s58h5 aae372b0-3b35-4e05-adfc-a8669c0399cd 11601741 0 2020-04-28 01:14:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8d457 0xc003a8d458}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.207,StartTime:2020-04-28 01:14:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 01:14:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://17463068545e57fcb82e04ad376617d4914cb1c7ef7e975ce227abb7d8e1757b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.207,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.636: INFO: Pod "webserver-deployment-595b5b9587-swhvj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-swhvj webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-swhvj ba0fe2bc-1be4-4904-8438-c65b13e89bff 11601723 0 2020-04-28 01:14:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8d5d7 0xc003a8d5d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.205,StartTime:2020-04-28 01:14:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 01:14:27 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://66a0c46e7c0b63d3455ee30bcb128df62a719feb912c9eed5096c682adb62085,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.205,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.637: INFO: Pod "webserver-deployment-595b5b9587-wt7f6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wt7f6 webserver-deployment-595b5b9587- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-595b5b9587-wt7f6 f1b7202a-663f-48ee-b2c6-a3dacc5c6226 11601891 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 43070d44-0142-4ed1-ae91-13ff63162d9b 0xc003a8d757 0xc003a8d758}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.637: INFO: Pod "webserver-deployment-c7997dcc8-45cwh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-45cwh webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-45cwh aa439eed-7047-4eef-9fe7-fe926d7f43d5 11601872 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc003a8d877 0xc003a8d878}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.637: INFO: Pod "webserver-deployment-c7997dcc8-64dt7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-64dt7 webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-64dt7 61c8b499-6e9e-44b1-a6c8-19128b0ce5d0 11601894 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc003a8d9a7 0xc003a8d9a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-28 01:14:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.637: INFO: Pod "webserver-deployment-c7997dcc8-7h2g9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7h2g9 webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-7h2g9 8119b92a-8392-4d27-bbf2-64c37e0750bf 11601816 0 2020-04-28 01:14:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc003a8db27 0xc003a8db28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-28 01:14:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.637: INFO: Pod "webserver-deployment-c7997dcc8-8glp2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8glp2 webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-8glp2 ac5ba4c8-ab68-4d26-a4f0-4422d92b527a 11601851 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc003a8dca7 0xc003a8dca8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.638: INFO: Pod "webserver-deployment-c7997dcc8-bbb7s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bbb7s webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-bbb7s 12ce7e91-b84a-4035-9fa5-c2b07e2ddacd 11601798 0 2020-04-28 01:14:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc003a8ddd7 0xc003a8ddd8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-28 01:14:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.638: INFO: Pod "webserver-deployment-c7997dcc8-d74bd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d74bd webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-d74bd 374c3445-3b12-4d2d-b060-b2fdde9865a5 11601866 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc003a8df57 0xc003a8df58}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.638: INFO: Pod "webserver-deployment-c7997dcc8-hd4b7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hd4b7 webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-hd4b7 4d55c27e-82b7-4f68-82d3-23aef07f9d66 11601819 0 2020-04-28 01:14:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc00293c197 0xc00293c198}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-28 01:14:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.638: INFO: Pod "webserver-deployment-c7997dcc8-kzq2f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kzq2f webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-kzq2f 726178fb-d3b5-495e-ac16-a7b5eb0de335 11601849 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc00293c637 0xc00293c638}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.638: INFO: Pod "webserver-deployment-c7997dcc8-pdblb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pdblb webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-pdblb e975f4fb-6e0f-414e-9df6-a3a2c5351e89 11601873 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc00293c897 0xc00293c898}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.639: INFO: Pod "webserver-deployment-c7997dcc8-r5wms" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r5wms webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-r5wms 5cc4f286-e69b-4bac-8830-78f132ba9179 11601881 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc00293ccf7 0xc00293ccf8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.639: INFO: Pod "webserver-deployment-c7997dcc8-spfpj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-spfpj webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-spfpj 460666d5-c58f-4dae-bf47-e5dfcaeec71f 11601790 0 2020-04-28 01:14:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc00293d287 0xc00293d288}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-28 01:14:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 01:14:35.639: INFO: Pod "webserver-deployment-c7997dcc8-vk9s6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vk9s6 webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-vk9s6 ba9417fd-b87c-425b-98db-70ad9f7b2db1 11601861 0 2020-04-28 01:14:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc00293d407 0xc00293d408}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 01:14:35.639: INFO: Pod "webserver-deployment-c7997dcc8-zn7sd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zn7sd webserver-deployment-c7997dcc8- deployment-4975 /api/v1/namespaces/deployment-4975/pods/webserver-deployment-c7997dcc8-zn7sd e039709b-aa88-48d7-a306-b03790d8d428 11601800 0 2020-04-28 01:14:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e512d923-ce98-445a-9798-427e0b37da5d 0xc00293d717 0xc00293d718}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thnns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thnns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thnns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 01:14:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-28 01:14:32 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:14:35.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4975" for this suite. • [SLOW TEST:15.544 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":266,"skipped":4601,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:14:35.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 01:14:36.421: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 270.922316ms)
Apr 28 01:14:36.465: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 43.850884ms)
Apr 28 01:14:36.470: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 5.253918ms)
Apr 28 01:14:36.476: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 5.25553ms)
Apr 28 01:14:36.512: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 36.604188ms)
Apr 28 01:14:36.524: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 11.424414ms)
Apr 28 01:14:36.527: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.072068ms)
Apr 28 01:14:36.530: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.996524ms)
Apr 28 01:14:36.533: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.974635ms)
Apr 28 01:14:36.536: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.786015ms)
Apr 28 01:14:36.539: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.204602ms)
Apr 28 01:14:36.542: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.765697ms)
Apr 28 01:14:36.545: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.754007ms)
Apr 28 01:14:36.548: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.462189ms)
Apr 28 01:14:36.551: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.02962ms)
Apr 28 01:14:36.554: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.315394ms)
Apr 28 01:14:36.558: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.145865ms)
Apr 28 01:14:36.561: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.966697ms)
Apr 28 01:14:36.575: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 14.243914ms)
Apr 28 01:14:36.578: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/
(200; 3.439251ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:14:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6288" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":267,"skipped":4612,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:14:36.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-ddfe05e4-7040-4ece-b880-be75517eb042 STEP: Creating a pod to test consume secrets Apr 28 01:14:36.710: INFO: Waiting up to 5m0s for pod "pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d" in namespace "secrets-8691" to be "Succeeded or Failed" Apr 28 01:14:36.728: INFO: Pod "pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.223613ms Apr 28 01:14:38.850: INFO: Pod "pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.139976508s Apr 28 01:14:41.505: INFO: Pod "pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.79504508s Apr 28 01:14:43.736: INFO: Pod "pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.025999532s Apr 28 01:14:45.760: INFO: Pod "pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.049563485s Apr 28 01:14:47.791: INFO: Pod "pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.080410143s Apr 28 01:14:49.856: INFO: Pod "pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.145764808s Apr 28 01:14:52.092: INFO: Pod "pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.381811546s STEP: Saw pod success Apr 28 01:14:52.092: INFO: Pod "pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d" satisfied condition "Succeeded or Failed" Apr 28 01:14:52.210: INFO: Trying to get logs from node latest-worker pod pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d container secret-volume-test: STEP: delete the pod Apr 28 01:14:53.190: INFO: Waiting for pod pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d to disappear Apr 28 01:14:53.202: INFO: Pod pod-secrets-9fc1712d-acc7-4541-b367-0e75b0800e9d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:14:53.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8691" for this suite. STEP: Destroying namespace "secret-namespace-1054" for this suite. 
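The secrets-volume test above repeatedly polls the pod's phase, logging the elapsed time, until it reaches "Succeeded or Failed" or the 5m0s budget runs out. A minimal sketch of that polling pattern, with a hypothetical `get_phase` stub standing in for the real API call the framework makes:

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it reports 'Succeeded' or 'Failed', or timeout expires."""
    start = clock()
    while True:
        phase = get_phase()  # stand-in for reading pod.status.phase from the API
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(interval)

# Simulated phase sequence mirroring the log: several Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), sleep=lambda _: None)
```

The real framework additionally treats "Failed" as a test failure for this spec; the sketch only surfaces the terminal phase.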
• [SLOW TEST:16.712 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4625,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:14:53.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 28 01:14:53.584: INFO: Waiting up to 5m0s for pod "downward-api-5ab88eb7-0f68-4579-a480-f37ba4a3b539" in namespace "downward-api-8196" to be "Succeeded or Failed" Apr 28 01:14:53.669: INFO: Pod "downward-api-5ab88eb7-0f68-4579-a480-f37ba4a3b539": Phase="Pending", Reason="", readiness=false. Elapsed: 84.610739ms Apr 28 01:14:55.673: INFO: Pod "downward-api-5ab88eb7-0f68-4579-a480-f37ba4a3b539": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.088845806s Apr 28 01:14:57.677: INFO: Pod "downward-api-5ab88eb7-0f68-4579-a480-f37ba4a3b539": Phase="Running", Reason="", readiness=true. Elapsed: 4.092891464s Apr 28 01:14:59.681: INFO: Pod "downward-api-5ab88eb7-0f68-4579-a480-f37ba4a3b539": Phase="Running", Reason="", readiness=true. Elapsed: 6.096808758s Apr 28 01:15:01.695: INFO: Pod "downward-api-5ab88eb7-0f68-4579-a480-f37ba4a3b539": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110145688s STEP: Saw pod success Apr 28 01:15:01.695: INFO: Pod "downward-api-5ab88eb7-0f68-4579-a480-f37ba4a3b539" satisfied condition "Succeeded or Failed" Apr 28 01:15:01.698: INFO: Trying to get logs from node latest-worker2 pod downward-api-5ab88eb7-0f68-4579-a480-f37ba4a3b539 container dapi-container: STEP: delete the pod Apr 28 01:15:01.722: INFO: Waiting for pod downward-api-5ab88eb7-0f68-4579-a480-f37ba4a3b539 to disappear Apr 28 01:15:01.727: INFO: Pod downward-api-5ab88eb7-0f68-4579-a480-f37ba4a3b539 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:15:01.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8196" for this suite. 
• [SLOW TEST:8.435 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4632,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:15:01.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 01:15:01.855: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/ pods/ (200; 4.10912ms)
Apr 28 01:15:01.858: INFO: (1) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.583928ms)
Apr 28 01:15:01.861: INFO: (2) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.257659ms)
Apr 28 01:15:01.864: INFO: (3) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.709883ms)
Apr 28 01:15:01.866: INFO: (4) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.501719ms)
Apr 28 01:15:01.869: INFO: (5) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.705236ms)
Apr 28 01:15:01.871: INFO: (6) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.27755ms)
Apr 28 01:15:01.874: INFO: (7) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.309501ms)
Apr 28 01:15:01.876: INFO: (8) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.320631ms)
Apr 28 01:15:01.879: INFO: (9) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.775936ms)
Apr 28 01:15:01.881: INFO: (10) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.499498ms)
Apr 28 01:15:01.884: INFO: (11) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.655251ms)
Apr 28 01:15:01.887: INFO: (12) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.048689ms)
Apr 28 01:15:01.890: INFO: (13) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.071509ms)
Apr 28 01:15:01.893: INFO: (14) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.996055ms)
Apr 28 01:15:01.896: INFO: (15) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.996353ms)
Apr 28 01:15:01.900: INFO: (16) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.395583ms)
Apr 28 01:15:01.904: INFO: (17) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.803952ms)
Apr 28 01:15:01.907: INFO: (18) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.044415ms)
Apr 28 01:15:01.910: INFO: (19) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.228863ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:15:01.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9610" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":270,"skipped":4637,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:15:01.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Apr 28 01:15:02.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Apr 28 01:15:02.174: INFO: stderr: "" Apr 28 01:15:02.174: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:15:02.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5925" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":271,"skipped":4638,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:15:02.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] 
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:15:02.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3213" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":272,"skipped":4662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:15:02.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-dfbdf3a3-2635-4cc2-9d73-31937c3e40ba STEP: Creating secret with name s-test-opt-upd-6ee8b6b7-d6a7-43d0-be71-ab2a33c8395d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-dfbdf3a3-2635-4cc2-9d73-31937c3e40ba STEP: Updating secret s-test-opt-upd-6ee8b6b7-d6a7-43d0-be71-ab2a33c8395d STEP: Creating secret with name s-test-opt-create-617be92e-480c-4e75-9be5-950c9d607889 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:16:30.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1792" for this suite. • [SLOW TEST:88.649 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4693,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:16:30.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-5272a9ee-d97f-404b-a1f1-5c10b2f5cb11 STEP: Creating a pod to test consume secrets Apr 28 01:16:30.997: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-721eff3a-991c-475e-8676-a758ee9caa0b" in namespace 
"projected-8023" to be "Succeeded or Failed" Apr 28 01:16:31.073: INFO: Pod "pod-projected-secrets-721eff3a-991c-475e-8676-a758ee9caa0b": Phase="Pending", Reason="", readiness=false. Elapsed: 76.173694ms Apr 28 01:16:33.077: INFO: Pod "pod-projected-secrets-721eff3a-991c-475e-8676-a758ee9caa0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079945194s Apr 28 01:16:35.080: INFO: Pod "pod-projected-secrets-721eff3a-991c-475e-8676-a758ee9caa0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083180761s STEP: Saw pod success Apr 28 01:16:35.080: INFO: Pod "pod-projected-secrets-721eff3a-991c-475e-8676-a758ee9caa0b" satisfied condition "Succeeded or Failed" Apr 28 01:16:35.083: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-721eff3a-991c-475e-8676-a758ee9caa0b container projected-secret-volume-test: STEP: delete the pod Apr 28 01:16:35.153: INFO: Waiting for pod pod-projected-secrets-721eff3a-991c-475e-8676-a758ee9caa0b to disappear Apr 28 01:16:35.275: INFO: Pod pod-projected-secrets-721eff3a-991c-475e-8676-a758ee9caa0b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:16:35.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8023" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4710,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 01:16:35.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-35957221-ae9a-468e-a273-da3a934cec99 STEP: Creating a pod to test consume secrets Apr 28 01:16:35.378: INFO: Waiting up to 5m0s for pod "pod-secrets-0cdc7eaf-86f1-4a68-a020-9190ad3c7183" in namespace "secrets-8291" to be "Succeeded or Failed" Apr 28 01:16:35.382: INFO: Pod "pod-secrets-0cdc7eaf-86f1-4a68-a020-9190ad3c7183": Phase="Pending", Reason="", readiness=false. Elapsed: 3.071705ms Apr 28 01:16:37.385: INFO: Pod "pod-secrets-0cdc7eaf-86f1-4a68-a020-9190ad3c7183": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006129003s Apr 28 01:16:39.389: INFO: Pod "pod-secrets-0cdc7eaf-86f1-4a68-a020-9190ad3c7183": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010560662s STEP: Saw pod success Apr 28 01:16:39.389: INFO: Pod "pod-secrets-0cdc7eaf-86f1-4a68-a020-9190ad3c7183" satisfied condition "Succeeded or Failed" Apr 28 01:16:39.392: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-0cdc7eaf-86f1-4a68-a020-9190ad3c7183 container secret-env-test: STEP: delete the pod Apr 28 01:16:39.429: INFO: Waiting for pod pod-secrets-0cdc7eaf-86f1-4a68-a020-9190ad3c7183 to disappear Apr 28 01:16:39.435: INFO: Pod pod-secrets-0cdc7eaf-86f1-4a68-a020-9190ad3c7183 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 01:16:39.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8291" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4717,"failed":0} Apr 28 01:16:39.443: INFO: Running AfterSuite actions on all nodes Apr 28 01:16:39.443: INFO: Running AfterSuite actions on node 1 Apr 28 01:16:39.443: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0} Ran 275 of 4992 Specs in 4534.290 seconds SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped PASS
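The two Proxy specs in this run issue twenty GETs each against the node-logs proxy subresource: once letting the apiserver use the default kubelet address, and once with kubelet port 10250 spelled out in the node name. A small sketch of how those request paths are formed (the helper name is ours, not part of the suite; the paths themselves are the ones recorded above):

```python
def node_log_proxy_path(node, port=None):
    """Build the apiserver proxy-subresource path for a node's /logs/ directory.

    With port=None the apiserver resolves the kubelet endpoint itself;
    with an explicit port the node segment becomes "<node>:<port>".
    """
    target = node if port is None else f"{node}:{port}"
    return f"/api/v1/nodes/{target}/proxy/logs/"

# The two variants exercised by the suite above:
default_path = node_log_proxy_path("latest-worker2")
explicit_path = node_log_proxy_path("latest-worker2", 10250)
```

Either path, appended to the apiserver base URL, returns the kubelet's log directory listing (the `containers/` and `pods/` entries seen in the 200 responses).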