I0625 23:39:07.213991 8 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0625 23:39:07.214179 8 e2e.go:129] Starting e2e run "b15b0c1e-dce6-4226-8447-6b2b37a23b07" on Ginkgo node 1
{"msg":"Test Suite starting","total":294,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1593128346 - Will randomize all specs
Will run 294 of 5102 specs

Jun 25 23:39:07.281: INFO: >>> kubeConfig: /root/.kube/config
Jun 25 23:39:07.283: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 25 23:39:07.310: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 25 23:39:07.340: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 25 23:39:07.340: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 25 23:39:07.340: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 25 23:39:07.348: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 25 23:39:07.348: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 25 23:39:07.348: INFO: e2e test version: v1.19.0-beta.1.98+60b800358f7784
Jun 25 23:39:07.349: INFO: kube-apiserver version: v1.18.2
Jun 25 23:39:07.350: INFO: >>> kubeConfig: /root/.kube/config
Jun 25 23:39:07.353: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 25 23:39:07.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jun 25 23:39:07.426: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun 25 23:39:07.467: INFO: Waiting up to 5m0s for pod "downwardapi-volume-973ff95e-04c3-44ef-97ba-5669bb8e0978" in namespace "projected-6775" to be "Succeeded or Failed"
Jun 25 23:39:07.485: INFO: Pod "downwardapi-volume-973ff95e-04c3-44ef-97ba-5669bb8e0978": Phase="Pending", Reason="", readiness=false. Elapsed: 18.842439ms
Jun 25 23:39:09.492: INFO: Pod "downwardapi-volume-973ff95e-04c3-44ef-97ba-5669bb8e0978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025292223s
Jun 25 23:39:11.496: INFO: Pod "downwardapi-volume-973ff95e-04c3-44ef-97ba-5669bb8e0978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029137709s
STEP: Saw pod success
Jun 25 23:39:11.496: INFO: Pod "downwardapi-volume-973ff95e-04c3-44ef-97ba-5669bb8e0978" satisfied condition "Succeeded or Failed"
Jun 25 23:39:11.499: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-973ff95e-04c3-44ef-97ba-5669bb8e0978 container client-container:
STEP: delete the pod
Jun 25 23:39:11.546: INFO: Waiting for pod downwardapi-volume-973ff95e-04c3-44ef-97ba-5669bb8e0978 to disappear
Jun 25 23:39:11.582: INFO: Pod downwardapi-volume-973ff95e-04c3-44ef-97ba-5669bb8e0978 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 25 23:39:11.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6775" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":294,"completed":1,"skipped":14,"failed":0}
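The spec above builds a pod whose projected downward API volume exposes metadata.name as a file, runs it to completion, and asserts on the file content. As a hedged sketch of that fixture, not the suite's own code (the pod name, mount path, and busybox image are illustrative assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod sketches a pod that mounts metadata.name as a file via a
// projected downward API volume, then exits so a test can assert "Succeeded".
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29", // assumption; the suite uses its own test image
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "podname",
									FieldRef: &corev1.ObjectFieldSelector{
										FieldPath: "metadata.name",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(downwardAPIPod().Name) }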
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":2,"skipped":18,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:39:15.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:39:31.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2928" for this suite. STEP: Destroying namespace "nsdeletetest-1100" for this suite. Jun 25 23:39:31.182: INFO: Namespace nsdeletetest-1100 was already deleted STEP: Destroying namespace "nsdeletetest-1496" for this suite. 
SS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 25 23:39:31.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-5728/configmap-test-58a565f2-d169-4786-9bba-888abcb2d3e6
STEP: Creating a pod to test consume configMaps
Jun 25 23:39:31.274: INFO: Waiting up to 5m0s for pod "pod-configmaps-d35c9ecc-e1d2-4dd7-bb9c-5a468a7f50ce" in namespace "configmap-5728" to be "Succeeded or Failed"
Jun 25 23:39:31.279: INFO: Pod "pod-configmaps-d35c9ecc-e1d2-4dd7-bb9c-5a468a7f50ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570663ms
Jun 25 23:39:33.391: INFO: Pod "pod-configmaps-d35c9ecc-e1d2-4dd7-bb9c-5a468a7f50ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116308156s
Jun 25 23:39:35.528: INFO: Pod "pod-configmaps-d35c9ecc-e1d2-4dd7-bb9c-5a468a7f50ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.253798068s
STEP: Saw pod success
Jun 25 23:39:35.528: INFO: Pod "pod-configmaps-d35c9ecc-e1d2-4dd7-bb9c-5a468a7f50ce" satisfied condition "Succeeded or Failed"
Jun 25 23:39:35.532: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d35c9ecc-e1d2-4dd7-bb9c-5a468a7f50ce container env-test:
STEP: delete the pod
Jun 25 23:39:35.562: INFO: Waiting for pod pod-configmaps-d35c9ecc-e1d2-4dd7-bb9c-5a468a7f50ce to disappear
Jun 25 23:39:35.578: INFO: Pod pod-configmaps-d35c9ecc-e1d2-4dd7-bb9c-5a468a7f50ce no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 25 23:39:35.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5728" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":294,"completed":4,"skipped":49,"failed":0}
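Here the ConfigMap is consumed through the container environment rather than a volume. A small sketch of that wiring, with placeholder ConfigMap name, key, and variable name (not the test's generated values):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// envFromConfigMap returns a container environment variable wired to a
// single ConfigMap key, the mechanism the [sig-node] ConfigMap spec exercises.
func envFromConfigMap(cmName, key string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: "CONFIG_DATA_1", // placeholder variable name
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				Key:                  key,
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", envFromConfigMap("configmap-test", "data-1"))
}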
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":294,"completed":4,"skipped":49,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:39:35.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:39:35.685: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:39:39.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5873" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":294,"completed":5,"skipped":53,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:39:39.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-de1d5264-34f1-44a8-a524-eeecd2b3dbe6 STEP: Creating secret with name s-test-opt-upd-ec3fd25f-7e3d-4e9a-9e7e-0d38b2fd6d97 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-de1d5264-34f1-44a8-a524-eeecd2b3dbe6 STEP: Updating secret s-test-opt-upd-ec3fd25f-7e3d-4e9a-9e7e-0d38b2fd6d97 STEP: Creating secret with name s-test-opt-create-c00be8e5-0ce0-4798-bf62-28fc00c877f2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:39:50.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9289" for this suite. 
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 25 23:39:50.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Jun 25 23:39:50.128: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 25 23:39:56.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6576" for this suite.
• [SLOW TEST:6.255 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":294,"completed":7,"skipped":88,"failed":0}
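With restartPolicy Never, a failing init container is not retried: the pod moves to phase Failed and the app container never starts, which is what the roughly six-second wait above observes. A sketch of the shape of pod this spec creates (image and commands are assumptions, not the suite's fixture):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod: the init container always exits non-zero, so with
// RestartPolicyNever the pod fails and "run1" is never started.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "exit 1"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo should never run"},
			}},
		},
	}
}

func main() { fmt.Println(failingInitPod().Name) }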
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 25 23:39:56.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 25 23:39:56.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-882" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":294,"completed":8,"skipped":110,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 25 23:39:56.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating replication controller my-hostname-basic-4cac9541-5532-4cef-85a9-f532190ccfcc
Jun 25 23:39:57.020: INFO: Pod name my-hostname-basic-4cac9541-5532-4cef-85a9-f532190ccfcc: Found 0 pods out of 1
Jun 25 23:40:02.024: INFO: Pod name my-hostname-basic-4cac9541-5532-4cef-85a9-f532190ccfcc: Found 1 pods out of 1
Jun 25 23:40:02.024: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4cac9541-5532-4cef-85a9-f532190ccfcc" are running
Jun 25 23:40:02.033: INFO: Pod "my-hostname-basic-4cac9541-5532-4cef-85a9-f532190ccfcc-khtj6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-25 23:39:57 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-25 23:40:00 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-25 23:40:00 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-25 23:39:57 +0000 UTC Reason: Message:}])
Jun 25 23:40:02.033: INFO: Trying to dial the pod
Jun 25 23:40:07.052: INFO: Controller my-hostname-basic-4cac9541-5532-4cef-85a9-f532190ccfcc: Got expected result from replica 1 [my-hostname-basic-4cac9541-5532-4cef-85a9-f532190ccfcc-khtj6]: "my-hostname-basic-4cac9541-5532-4cef-85a9-f532190ccfcc-khtj6", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 25 23:40:07.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1936" for this suite.
• [SLOW TEST:10.225 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":294,"completed":9,"skipped":113,"failed":0}
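The ReplicationController test creates one replica of a pod that serves its own hostname over HTTP, then dials the replica and compares the response against the pod name seen in the log above. A hedged sketch of an equivalent controller; the agnhost image tag is taken from this cluster's image list, but the exact fixture the suite builds may differ:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// basicRC: one replica serving its hostname; the RC's selector must match
// the pod template labels or the API server rejects it.
func basicRC(name string) *corev1.ReplicationController {
	replicas := int32(1)
	labels := map[string]string{"name": name}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
						Args:  []string{"serve-hostname"},
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(basicRC("my-hostname-basic-example").Name) }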
SS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin]
  should support CSR API operations [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 25 23:40:07.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
Jun 25 23:40:07.415: FAIL: expected certificates API group/version, got []v1.APIGroup{v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiregistration.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"extensions",
Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apps", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"events.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authentication.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authorization.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"autoscaling", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta1", Version:"v2beta1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta2", Version:"v2beta2"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"batch", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"batch/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"certificates.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"networking.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1beta1", Version:"v1beta1"}}, 
PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"policy", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"rbac.authorization.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"storage.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"admissionregistration.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiextensions.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"scheduling.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"coordination.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"node.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}}, 
PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"discovery.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}} Expected : false to equal : true Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func2.2() /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:231 +0x7ce k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00280c600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x360 k8s.io/kubernetes/test/e2e.TestE2E(0xc00280c600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:141 +0x2b testing.tRunner(0xc00280c600, 0x4e37068) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "certificates-772". STEP: Found 0 events. Jun 25 23:40:07.419: INFO: POD NODE PHASE GRACE CONDITIONS Jun 25 23:40:07.419: INFO: Jun 25 23:40:07.421: INFO: Logging node info for node latest-control-plane Jun 25 23:40:07.423: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane b7c23ecc-1548-479e-83f7-eb5444fbe13d 15901321 0 2020-04-29 09:53:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:53:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2020-06-25 23:37:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-06-25 23:37:43 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-06-25 23:37:43 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-06-25 23:37:43 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-06-25 23:37:43 +0000 UTC,LastTransitionTime:2020-04-29 09:54:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.11,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3939cf129c9d4d6e85e611ab996d9137,SystemUUID:2573ae1d-4849-412e-9a34-432f95556990,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 25 23:40:07.423: INFO: Logging kubelet events for node latest-control-plane
Jun 25 23:40:07.425: INFO: Logging pods the kubelet thinks is on node latest-control-plane
Jun 25 23:40:07.441: INFO: etcd-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.441: INFO: Container etcd ready: true, restart count 4
Jun 25 23:40:07.441: INFO: kube-apiserver-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.441: INFO: Container kube-apiserver ready: true, restart count 2
Jun 25 23:40:07.441: INFO: kindnet-8x7pf started at 2020-04-29 09:53:53 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.441: INFO: Container kindnet-cni ready: true, restart count 4
Jun 25 23:40:07.441: INFO: coredns-66bff467f8-8n5vh started at 2020-04-29 09:54:06 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.441: INFO: Container coredns ready: true, restart count 0
Jun 25 23:40:07.441: INFO: local-path-provisioner-bd4bb6b75-bmf2h started at 2020-04-29 09:54:06 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.441: INFO: Container local-path-provisioner ready: true, restart count 87
Jun 25 23:40:07.441: INFO: kube-scheduler-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.441: INFO: Container kube-scheduler ready: true, restart count 115
Jun 25 23:40:07.441: INFO: kube-controller-manager-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.441: INFO: Container kube-controller-manager ready: true, restart count 119
Jun 25 23:40:07.441: INFO: kube-proxy-h8mhz started at 2020-04-29 09:53:54 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.441: INFO: Container kube-proxy ready: true, restart count 0
Jun 25 23:40:07.441: INFO: coredns-66bff467f8-qr7l5 started at 2020-04-29 09:54:10 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.441: INFO: Container coredns ready: true, restart count 0
W0625 23:40:07.444774 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 25 23:40:07.517: INFO: Latency metrics for node latest-control-plane
Jun 25 23:40:07.517: INFO: Logging node info for node latest-worker
Jun 25 23:40:07.520: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker 2f09bb79-b24c-46f4-8a0d-ace124db698c 15901002 0 2020-04-29 09:54:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:54:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2020-06-25 23:36:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-06-25 23:36:17 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-06-25 23:36:17 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-06-25 23:36:17 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-06-25 23:36:17 +0000 UTC,LastTransitionTime:2020-04-29 09:54:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.13,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:83dc4a3bd84a4693999c93a6c8c5f678,SystemUUID:66e94596-e77d-487e-8e4a-bc652b040cea,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:9e6d47f5fb42621781fac92b9f8f86a7e00596fd5c022472a51d33b8c6638b85 docker.io/aquasec/kube-hunter:latest],SizeBytes:126124611,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:c42be6eafdbe71363ad6a2035fe53f12dbe36aab19a1a3c015231e97cd11d986],SizeBytes:8039911,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bdfc3a8aeed63e545ab0df01806707219ffb785bca75e08cbee043075dedfb3c],SizeBytes:8039898,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a3fe72ad3946d830134b92e5c922a92d4aeb594f0445d178f9e2d610b1be04b5],SizeBytes:8039861,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:6da1996cf654bbc10175028832d6ffb92720272d0deca971bb296ea9092f4273],SizeBytes:8039845,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:cab37ac2de78ddbc6655eddae38239ebafdf79a7934bc53361e1524a2ed5ab56 
docker.io/aquasec/kube-bench:latest],SizeBytes:8035885,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c docker.io/library/busybox@sha256:52cfc475afdd697afd2dbe1a3761c8001bf3ba39f76819c922128c088869d339],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 docker.io/library/busybox:latest],SizeBytes:764556,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 25 23:40:07.522: INFO: Logging kubelet events for node latest-worker
Jun 25 23:40:07.524: INFO: Logging pods the kubelet thinks is on node latest-worker
Jun 25 23:40:07.531: INFO: rally-c184502e-30nwopzm-7fmqm started at 2020-05-11 08:48:29 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.531: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
Jun 25 23:40:07.531: INFO: kube-proxy-c8n27 started at 2020-04-29 09:54:13 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.531: INFO: Container kube-proxy ready: true, restart count 0
Jun 25 23:40:07.531: INFO: kindnet-hg2tf started at 2020-04-29 09:54:13 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.531: INFO: Container kindnet-cni ready: true, restart count 5
Jun 25 23:40:07.531: INFO: rally-c184502e-30nwopzm started at 2020-05-11 08:48:25 +0000 UTC (0+1 container statuses recorded)
Jun 25 23:40:07.531: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
W0625 23:40:07.534580 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 25 23:40:07.581: INFO: Latency metrics for node latest-worker
Jun 25 23:40:07.581: INFO: Logging node info for node latest-worker2
Jun 25 23:40:07.594: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 edb8c16e-16f9-40fa-97b0-84ba80a01b1f 15900806 0 2020-04-29 09:54:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:54:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2020-06-25 23:35:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-06-25 23:35:43 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-06-25 23:35:43 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-06-25 23:35:43 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-06-25 23:35:43 +0000 UTC,LastTransitionTime:2020-04-29 09:54:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.12,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a92a0b35db3a4f1fb7e74bf96e498c99,SystemUUID:8fa82d10-b80f-4f70-a9ff-665f94ff4ecc,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:d0af3efaa83cf2106879b7fd3972faaee44a0d4a82db97b27f33f8c71aa450b3 docker.io/aquasec/kube-hunter:latest],SizeBytes:127384616,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:9e6d47f5fb42621781fac92b9f8f86a7e00596fd5c022472a51d33b8c6638b85],SizeBytes:126124611,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bdfc3a8aeed63e545ab0df01806707219ffb785bca75e08cbee043075dedfb3c],SizeBytes:8039898,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a3fe72ad3946d830134b92e5c922a92d4aeb594f0445d178f9e2d610b1be04b5 
docker.io/aquasec/kube-bench:latest],SizeBytes:8039861,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:52cfc475afdd697afd2dbe1a3761c8001bf3ba39f76819c922128c088869d339 docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 docker.io/library/busybox:latest],SizeBytes:764556,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 25 23:40:07.595: INFO: Logging kubelet events for node latest-worker2 Jun 25 23:40:07.609: INFO: Logging pods the kubelet thinks is on node latest-worker2 Jun 25 23:40:07.616: INFO: my-hostname-basic-4cac9541-5532-4cef-85a9-f532190ccfcc-khtj6 started at 2020-06-25 23:39:57 +0000 UTC (0+1 container statuses recorded) Jun 25 23:40:07.616: INFO: Container my-hostname-basic-4cac9541-5532-4cef-85a9-f532190ccfcc ready: true, restart count 0 Jun 25 23:40:07.616: INFO: kindnet-jl4dn started at 2020-04-29 09:54:11 +0000 UTC (0+1 container statuses recorded) Jun 25 23:40:07.616: INFO: Container kindnet-cni ready: true, restart count 5 Jun 25 23:40:07.616: INFO: kube-proxy-pcmmp started at 2020-04-29 09:54:11 +0000 UTC (0+1 container statuses recorded) Jun 25 23:40:07.616: INFO: Container kube-proxy ready: true, restart count 0 Jun 25 23:40:07.616: INFO: rally-c184502e-ept97j69-6xvbj started at 2020-05-11 08:48:03 +0000 UTC (0+1 container statuses recorded) Jun 25 23:40:07.616: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 25 23:40:07.616: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 started at 2020-05-12 09:11:35 +0000 UTC (0+1 container statuses recorded) Jun 25 23:40:07.616: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 25 
23:40:07.616: INFO: pod-logs-websocket-8fc076e3-e858-4679-bd66-869d5f55297f started at 2020-06-25 23:39:35 +0000 UTC (0+1 container statuses recorded) Jun 25 23:40:07.616: INFO: Container main ready: true, restart count 0 W0625 23:40:07.620412 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 25 23:40:07.676: INFO: Latency metrics for node latest-worker2 Jun 25 23:40:07.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-772" for this suite. • Failure [0.623 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support CSR API operations [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:40:07.415: expected certificates API group/version, got []v1.APIGroup{v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiregistration.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"extensions", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apps", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"events.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authentication.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authorization.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, 
v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"autoscaling", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta1", Version:"v2beta1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta2", Version:"v2beta2"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"batch", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"batch/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"certificates.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"networking.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"policy", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"rbac.authorization.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"storage.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"admissionregistration.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, 
v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiextensions.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"scheduling.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"coordination.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"node.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"discovery.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}} Expected : false to equal : true /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:231 ------------------------------ {"msg":"FAILED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":294,"completed":9,"skipped":115,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:40:07.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition 
STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:40:07.810: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:40:08.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2610" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":294,"completed":10,"skipped":125,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:40:08.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:40:08.910: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:40:15.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9958" for this suite. 
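------------------------------
The FAILED Certificates API spec above dumps the discovered API group list, in which certificates.k8s.io appears only at v1beta1; the spec's assertion at certificates.go:231 expects a newer group version, so the discovery check fails. A minimal client-go sketch of that same discovery check, as a hypothetical standalone program (not the framework's code; the kubeconfig path is the one shown in the log):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// ServerGroups returns the same []v1.APIGroup list printed in the failure.
	groups, err := client.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "certificates.k8s.io" {
			for _, v := range g.Versions {
				// Against this cluster, only certificates.k8s.io/v1beta1 prints.
				fmt.Printf("%s serves %s\n", g.Name, v.GroupVersion)
			}
		}
	}
}
------------------------------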
• [SLOW TEST:7.126 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":294,"completed":11,"skipped":128,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:40:15.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 25 23:40:16.569: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 25 23:40:18.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725216, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725216, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725216, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725216, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 25 23:40:20.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725216, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725216, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725216, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725216, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 25 23:40:23.765: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:40:23.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5657" for this suite. STEP: Destroying namespace "webhook-5657-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.988 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":294,"completed":12,"skipped":134,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:40:23.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-af04e75e-7ae1-46fd-8918-be15219f2bcc STEP: Creating configMap with name cm-test-opt-upd-2d62ae67-6020-4447-b14a-e920e069f616 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-af04e75e-7ae1-46fd-8918-be15219f2bcc STEP: Updating configmap cm-test-opt-upd-2d62ae67-6020-4447-b14a-e920e069f616 STEP: Creating configMap with name cm-test-opt-create-612ce2a8-a814-4189-87f3-4924adf7a6ae STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 
23:40:32.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1606" for this suite. • [SLOW TEST:8.294 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":13,"skipped":160,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:40:32.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 25 23:40:32.760: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 25 23:40:34.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725232, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725232, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725232, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725232, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 25 23:40:37.847: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jun 25 23:40:37.900: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:40:38.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6629" for this suite. STEP: Destroying namespace "webhook-6629-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.368 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":294,"completed":14,"skipped":187,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:40:38.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jun 25 23:40:43.535: INFO: Successfully updated pod "annotationupdate0c3bfe2f-02f0-4f0d-90c3-371ddcf55c52" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:40:47.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1069" for this suite. 
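------------------------------
The Downward API volume spec above updates the pod's annotations and waits for the change to show up in the mounted file. A minimal sketch of the volume wiring that behavior exercises, with hypothetical pod and file names (not the test's manifest); the busybox image is one listed on the nodes above:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The kubelet rewrites /etc/podinfo/annotations when annotations change,
	// which is the update the spec above observes.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo", // hypothetical name
			Annotations: map[string]string{"builder": "demo"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
------------------------------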
• [SLOW TEST:8.934 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":294,"completed":15,"skipped":215,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:40:47.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jun 25 23:40:47.642: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jun 25 23:41:00.753: INFO: >>> kubeConfig: /root/.kube/config Jun 25 23:41:03.768: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:41:14.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3124" for this suite. 
• [SLOW TEST:26.873 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":294,"completed":16,"skipped":219,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:41:14.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1316 STEP: creating the pod Jun 25 23:41:14.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4' Jun 25 23:41:18.012: INFO: stderr: "" Jun 25 23:41:18.012: INFO: stdout: "pod/pause created\n" Jun 25 23:41:18.013: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 25 23:41:18.013: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4" to be "running and ready" Jun 25 23:41:18.045: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 32.021238ms Jun 25 23:41:20.068: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055401586s Jun 25 23:41:22.072: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.059542073s Jun 25 23:41:22.072: INFO: Pod "pause" satisfied condition "running and ready" Jun 25 23:41:22.072: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Jun 25 23:41:22.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4' Jun 25 23:41:22.189: INFO: stderr: "" Jun 25 23:41:22.189: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 25 23:41:22.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4' Jun 25 23:41:22.297: INFO: stderr: "" Jun 25 23:41:22.297: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 25 23:41:22.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4' Jun 25 23:41:22.420: INFO: stderr: "" Jun 25 23:41:22.420: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 25 23:41:22.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4' Jun 25 23:41:22.525: INFO: stderr: "" Jun 25 23:41:22.525: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1323 STEP: using delete to clean up resources Jun 25 23:41:22.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4' Jun 25 23:41:22.641: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 25 23:41:22.641: INFO: stdout: "pod \"pause\" force deleted\n" Jun 25 23:41:22.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4' Jun 25 23:41:22.752: INFO: stderr: "No resources found in kubectl-4 namespace.\n" Jun 25 23:41:22.752: INFO: stdout: "" Jun 25 23:41:22.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 25 23:41:22.848: INFO: stderr: "" Jun 25 23:41:22.848: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:41:22.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4" for this suite. 
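------------------------------
The Kubectl label spec above adds, verifies, and removes testing-label with the CLI. The same add/remove cycle through client-go, as a minimal sketch using strategic merge patches (a null value deletes the key, which is what `kubectl label pods pause testing-label-` does); pod name and namespace are taken from the log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	pods := client.CoreV1().Pods("kubectl-4") // namespace from the log above

	// Equivalent of: kubectl label pods pause testing-label=testing-label-value
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := pods.Patch(ctx, "pause", types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Equivalent of: kubectl label pods pause testing-label-
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := pods.Patch(ctx, "pause", types.StrategicMergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
------------------------------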
• [SLOW TEST:8.401 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1313 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":294,"completed":17,"skipped":223,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:41:22.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:41:23.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jun 25 23:41:23.703: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-25T23:41:23Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-25T23:41:23Z]] name:name1 resourceVersion:15902618 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:762400a6-4bcc-4405-8c56-e16e24a526fa] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jun 25 23:41:33.709: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-25T23:41:33Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-25T23:41:33Z]] name:name2 resourceVersion:15902658 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:22111724-81e5-4a37-8b4a-3602f47c4815] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jun 25 23:41:43.747: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-25T23:41:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-25T23:41:43Z]] name:name1 resourceVersion:15902686 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:762400a6-4bcc-4405-8c56-e16e24a526fa] 
num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jun 25 23:41:53.753: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-25T23:41:33Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-25T23:41:53Z]] name:name2 resourceVersion:15902716 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:22111724-81e5-4a37-8b4a-3602f47c4815] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jun 25 23:42:03.768: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-25T23:41:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-25T23:41:43Z]] name:name1 resourceVersion:15902746 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:762400a6-4bcc-4405-8c56-e16e24a526fa] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jun 25 23:42:13.778: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-25T23:41:33Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-25T23:41:53Z]] name:name2 resourceVersion:15902775 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:22111724-81e5-4a37-8b4a-3602f47c4815] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:42:24.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-7444" for this suite. 
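------------------------------
The watch spec above receives ADDED, MODIFIED, and DELETED events for the WishIHadChosenNoxu custom resources as they are created, modified, and deleted. A minimal dynamic-client sketch of the same watch, as a hypothetical standalone program; the group/version/resource comes from the selfLinks in the log (/apis/mygroup.example.com/v1beta1/noxus):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus",
	}
	w, err := client.Resource(gvr).Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Prints ADDED / MODIFIED / DELETED plus the object, like the "Got :" lines
	// above; loops until the watch channel closes.
	for event := range w.ResultChan() {
		fmt.Println(event.Type, event.Object)
	}
}
------------------------------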
• [SLOW TEST:61.443 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":294,"completed":18,"skipped":235,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:42:24.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 25 23:42:24.400: INFO: Waiting up to 5m0s for pod "downward-api-8aa38b0d-7ce1-47b4-8c0a-f4b669c71c35" in namespace "downward-api-2493" to be "Succeeded or Failed" Jun 25 23:42:24.440: INFO: Pod "downward-api-8aa38b0d-7ce1-47b4-8c0a-f4b669c71c35": Phase="Pending", Reason="", readiness=false. Elapsed: 40.513832ms Jun 25 23:42:26.445: INFO: Pod "downward-api-8aa38b0d-7ce1-47b4-8c0a-f4b669c71c35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044575502s Jun 25 23:42:28.453: INFO: Pod "downward-api-8aa38b0d-7ce1-47b4-8c0a-f4b669c71c35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053369919s STEP: Saw pod success Jun 25 23:42:28.453: INFO: Pod "downward-api-8aa38b0d-7ce1-47b4-8c0a-f4b669c71c35" satisfied condition "Succeeded or Failed" Jun 25 23:42:28.458: INFO: Trying to get logs from node latest-worker pod downward-api-8aa38b0d-7ce1-47b4-8c0a-f4b669c71c35 container dapi-container: STEP: delete the pod Jun 25 23:42:28.502: INFO: Waiting for pod downward-api-8aa38b0d-7ce1-47b4-8c0a-f4b669c71c35 to disappear Jun 25 23:42:28.544: INFO: Pod downward-api-8aa38b0d-7ce1-47b4-8c0a-f4b669c71c35 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:42:28.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2493" for this suite. 
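------------------------------
The Downward API spec above injects pod name, namespace, and IP into the container as environment variables via fieldRef. A minimal sketch of that wiring with hypothetical names (the volume-based variant appears in an earlier sketch; this is the env-var mechanism):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fieldEnv maps a downward-API field path onto an environment variable.
func fieldEnv(name, path string) corev1.EnvVar {
	return corev1.EnvVar{
		Name:      name,
		ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: path}},
	}
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env | grep POD_"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
------------------------------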
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":294,"completed":19,"skipped":243,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:42:28.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-b401089d-2475-4f05-a6b6-48e6ffd719a7 STEP: Creating a pod to test consume configMaps Jun 25 23:42:28.655: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7d204dd-a660-4266-897c-2fd4e9022e15" in namespace "configmap-740" to be "Succeeded or Failed" Jun 25 23:42:28.658: INFO: Pod "pod-configmaps-a7d204dd-a660-4266-897c-2fd4e9022e15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.821401ms Jun 25 23:42:30.692: INFO: Pod "pod-configmaps-a7d204dd-a660-4266-897c-2fd4e9022e15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036241372s Jun 25 23:42:32.696: INFO: Pod "pod-configmaps-a7d204dd-a660-4266-897c-2fd4e9022e15": Phase="Running", Reason="", readiness=true. Elapsed: 4.040651678s Jun 25 23:42:34.700: INFO: Pod "pod-configmaps-a7d204dd-a660-4266-897c-2fd4e9022e15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045071176s STEP: Saw pod success Jun 25 23:42:34.701: INFO: Pod "pod-configmaps-a7d204dd-a660-4266-897c-2fd4e9022e15" satisfied condition "Succeeded or Failed" Jun 25 23:42:34.704: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a7d204dd-a660-4266-897c-2fd4e9022e15 container configmap-volume-test: STEP: delete the pod Jun 25 23:42:34.739: INFO: Waiting for pod pod-configmaps-a7d204dd-a660-4266-897c-2fd4e9022e15 to disappear Jun 25 23:42:34.746: INFO: Pod pod-configmaps-a7d204dd-a660-4266-897c-2fd4e9022e15 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:42:34.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-740" for this suite. 
• [SLOW TEST:6.198 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":20,"skipped":249,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:42:34.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:42:34.877: INFO: Waiting up to 5m0s for pod "busybox-user-65534-b31b0c5c-1ba2-4771-b07c-53601d09e244" in namespace "security-context-test-9034" to be "Succeeded or Failed" Jun 25 23:42:34.890: INFO: Pod "busybox-user-65534-b31b0c5c-1ba2-4771-b07c-53601d09e244": Phase="Pending", Reason="", readiness=false. Elapsed: 12.990469ms Jun 25 23:42:36.964: INFO: Pod "busybox-user-65534-b31b0c5c-1ba2-4771-b07c-53601d09e244": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087016079s Jun 25 23:42:38.968: INFO: Pod "busybox-user-65534-b31b0c5c-1ba2-4771-b07c-53601d09e244": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091483509s Jun 25 23:42:38.968: INFO: Pod "busybox-user-65534-b31b0c5c-1ba2-4771-b07c-53601d09e244" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:42:38.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9034" for this suite. 
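------------------------------
The Security Context spec above runs the container as uid 65534 and verifies the result. The relevant securityContext field, as a minimal sketch with a hypothetical pod name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(65534) // the "nobody" uid the spec asserts

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "busybox",
				Image:           "docker.io/library/busybox:1.29",
				Command:         []string{"id", "-u"}, // prints 65534 when the context is honored
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
------------------------------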
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":21,"skipped":286,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:42:38.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jun 25 23:42:39.200: INFO: >>> kubeConfig: /root/.kube/config Jun 25 23:42:42.152: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:42:53.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2076" for this suite. • [SLOW TEST:14.438 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":294,"completed":22,"skipped":288,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:42:53.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] 
Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:42:53.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8768" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":294,"completed":23,"skipped":306,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:42:53.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 25 23:42:53.784: INFO: Waiting up to 5m0s for pod "downwardapi-volume-886aa76e-9802-4386-b63c-3519cae54c76" in namespace "downward-api-690" to be "Succeeded or Failed" Jun 25 23:42:53.795: INFO: Pod "downwardapi-volume-886aa76e-9802-4386-b63c-3519cae54c76": Phase="Pending", Reason="", readiness=false. Elapsed: 10.374234ms Jun 25 23:42:55.799: INFO: Pod "downwardapi-volume-886aa76e-9802-4386-b63c-3519cae54c76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014463808s Jun 25 23:42:57.803: INFO: Pod "downwardapi-volume-886aa76e-9802-4386-b63c-3519cae54c76": Phase="Running", Reason="", readiness=true. Elapsed: 4.019199974s Jun 25 23:42:59.808: INFO: Pod "downwardapi-volume-886aa76e-9802-4386-b63c-3519cae54c76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024332714s STEP: Saw pod success Jun 25 23:42:59.809: INFO: Pod "downwardapi-volume-886aa76e-9802-4386-b63c-3519cae54c76" satisfied condition "Succeeded or Failed" Jun 25 23:42:59.812: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-886aa76e-9802-4386-b63c-3519cae54c76 container client-container: STEP: delete the pod Jun 25 23:42:59.897: INFO: Waiting for pod downwardapi-volume-886aa76e-9802-4386-b63c-3519cae54c76 to disappear Jun 25 23:42:59.908: INFO: Pod downwardapi-volume-886aa76e-9802-4386-b63c-3519cae54c76 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:42:59.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-690" for this suite. 
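What the cpu-limit test above projects is a downward-API resourceFieldRef, and the container must actually declare the limit being exposed. An illustrative sketch (module assumptions as before; name, image, and the 1250m value are not the suite's fixtures):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpulimit-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// The resourceFieldRef must name the container whose
							// limit is projected; with the default divisor the
							// value is rounded up to whole cores.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%#v\n", pod)
}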
• [SLOW TEST:6.307 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":294,"completed":24,"skipped":306,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} S ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:42:59.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jun 25 23:43:04.540: INFO: Successfully updated pod "labelsupdate412c3610-7a42-4d8b-afd7-2d92cccf8c1f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:43:06.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-691" for this suite. 
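The labels-on-modification test hinges on the kubelet rewriting a downward-API file after a metadata change. A client-go sketch of the create-then-patch flow, assuming a reachable cluster and kubeconfig; the namespace, names, and image are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Mutating the pod's labels is enough: the kubelet rewrites the projected
	// "labels" file in place, which is what the test observes.
	patch := []byte(`{"metadata":{"labels":{"key":"value2"}}}`)
	if _, err := cs.CoreV1().Pods("default").Patch(ctx, pod.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("labels patched; watch /etc/podinfo/labels inside the container")
}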
• [SLOW TEST:6.664 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":294,"completed":25,"skipped":307,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:43:06.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 25 23:43:06.642: INFO: Waiting up to 5m0s for pod "pod-9d3e9f58-7776-4982-a947-9e197f0480bc" in namespace "emptydir-599" to be "Succeeded or Failed" Jun 25 23:43:06.704: INFO: Pod "pod-9d3e9f58-7776-4982-a947-9e197f0480bc": Phase="Pending", Reason="", readiness=false. Elapsed: 61.961284ms Jun 25 23:43:08.708: INFO: Pod "pod-9d3e9f58-7776-4982-a947-9e197f0480bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066110142s Jun 25 23:43:10.712: INFO: Pod "pod-9d3e9f58-7776-4982-a947-9e197f0480bc": Phase="Running", Reason="", readiness=true. Elapsed: 4.070033246s Jun 25 23:43:12.716: INFO: Pod "pod-9d3e9f58-7776-4982-a947-9e197f0480bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074414788s STEP: Saw pod success Jun 25 23:43:12.716: INFO: Pod "pod-9d3e9f58-7776-4982-a947-9e197f0480bc" satisfied condition "Succeeded or Failed" Jun 25 23:43:12.719: INFO: Trying to get logs from node latest-worker pod pod-9d3e9f58-7776-4982-a947-9e197f0480bc container test-container: STEP: delete the pod Jun 25 23:43:12.751: INFO: Waiting for pod pod-9d3e9f58-7776-4982-a947-9e197f0480bc to disappear Jun 25 23:43:12.764: INFO: Pod pod-9d3e9f58-7776-4982-a947-9e197f0480bc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:43:12.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-599" for this suite. 
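For the (non-root,0666,default) emptyDir case, the moving parts are a non-root security context, a default-medium emptyDir, and a file created with mode 0666. An illustrative sketch; UID 1000, the paths, and busybox are arbitrary stand-ins for the test's mounttest fixture:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Run the whole pod as a non-root UID; 1000 is an arbitrary choice.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// umask 0 so the new file is created 0666, then print its mode
				// and contents back for verification.
				Command:      []string{"sh", "-c", "umask 0 && echo mount-tester > /test-volume/f && stat -c %a /test-volume/f && cat /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource means the default medium
				// (node disk), as opposed to Medium: "Memory".
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	fmt.Printf("%#v\n", pod)
}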
• [SLOW TEST:6.193 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":26,"skipped":322,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:43:12.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Jun 25 23:45:13.418: INFO: Successfully updated pod "var-expansion-dd3d9e15-604d-453c-908b-5c49efa55eca" STEP: waiting for pod running STEP: deleting the pod gracefully Jun 25 23:45:15.452: INFO: Deleting pod "var-expansion-dd3d9e15-604d-453c-908b-5c49efa55eca" in namespace "var-expansion-4288" Jun 25 23:45:15.459: INFO: Wait up to 5m0s for pod "var-expansion-dd3d9e15-604d-453c-908b-5c49efa55eca" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:45:55.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4288" for this suite. 
• [SLOW TEST:162.728 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":294,"completed":27,"skipped":325,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:45:55.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jun 25 23:45:55.545: INFO: >>> kubeConfig: /root/.kube/config Jun 25 23:45:58.508: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:46:09.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1628" for this suite. 
• [SLOW TEST:13.654 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":294,"completed":28,"skipped":342,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:46:09.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:46:15.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4230" for this suite. STEP: Destroying namespace "nsdeletetest-8608" for this suite. Jun 25 23:46:15.639: INFO: Namespace nsdeletetest-8608 was already deleted STEP: Destroying namespace "nsdeletetest-5361" for this suite. 
• [SLOW TEST:6.487 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":294,"completed":29,"skipped":353,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:46:15.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 25 23:46:19.778: INFO: &Pod{ObjectMeta:{send-events-caa94594-2068-41c9-a946-9ef419fb7420 events-3134 /api/v1/namespaces/events-3134/pods/send-events-caa94594-2068-41c9-a946-9ef419fb7420 5244d801-d79e-4826-b002-5a7a7e68f748 15903799 0 2020-06-25 23:46:15 +0000 UTC map[name:foo time:714675673] map[] [] [] [{e2e.test Update v1 2020-06-25 23:46:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-25 23:46:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zgrv5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zgrv5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zgrv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-25 23:46:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-25 23:46:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-25 23:46:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-25 23:46:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.48,StartTime:2020-06-25 23:46:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-25 23:46:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://5a407e91ef7a4582b656d293343a69f22b9854b45e33608fc3c1651a4a5115fc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jun 25 23:46:21.782: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 25 23:46:23.786: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:46:23.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3134" for this suite. • [SLOW TEST:8.174 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":294,"completed":30,"skipped":359,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:46:23.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 25 23:46:23.925: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f12839ca-25a3-46a1-be38-ef555b016afc" in namespace "downward-api-2728" to be "Succeeded or 
Failed" Jun 25 23:46:23.948: INFO: Pod "downwardapi-volume-f12839ca-25a3-46a1-be38-ef555b016afc": Phase="Pending", Reason="", readiness=false. Elapsed: 23.156786ms Jun 25 23:46:25.981: INFO: Pod "downwardapi-volume-f12839ca-25a3-46a1-be38-ef555b016afc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056081745s Jun 25 23:46:27.986: INFO: Pod "downwardapi-volume-f12839ca-25a3-46a1-be38-ef555b016afc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060740934s STEP: Saw pod success Jun 25 23:46:27.986: INFO: Pod "downwardapi-volume-f12839ca-25a3-46a1-be38-ef555b016afc" satisfied condition "Succeeded or Failed" Jun 25 23:46:27.989: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f12839ca-25a3-46a1-be38-ef555b016afc container client-container: STEP: delete the pod Jun 25 23:46:28.036: INFO: Waiting for pod downwardapi-volume-f12839ca-25a3-46a1-be38-ef555b016afc to disappear Jun 25 23:46:28.232: INFO: Pod downwardapi-volume-f12839ca-25a3-46a1-be38-ef555b016afc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:46:28.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2728" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":294,"completed":31,"skipped":391,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:46:28.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jun 25 23:46:28.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7158' Jun 25 23:46:31.230: INFO: stderr: "" Jun 25 23:46:31.230: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 25 23:46:31.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7158' Jun 25 23:46:31.350: INFO: stderr: "" Jun 25 23:46:31.350: INFO: stdout: "update-demo-nautilus-7mctx update-demo-nautilus-7tfh6 " Jun 25 23:46:31.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mctx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7158' Jun 25 23:46:31.441: INFO: stderr: "" Jun 25 23:46:31.441: INFO: stdout: "" Jun 25 23:46:31.441: INFO: update-demo-nautilus-7mctx is created but not running Jun 25 23:46:36.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7158' Jun 25 23:46:36.555: INFO: stderr: "" Jun 25 23:46:36.555: INFO: stdout: "update-demo-nautilus-7mctx update-demo-nautilus-7tfh6 " Jun 25 23:46:36.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mctx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7158' Jun 25 23:46:36.651: INFO: stderr: "" Jun 25 23:46:36.651: INFO: stdout: "true" Jun 25 23:46:36.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7mctx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7158' Jun 25 23:46:36.744: INFO: stderr: "" Jun 25 23:46:36.744: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 25 23:46:36.744: INFO: validating pod update-demo-nautilus-7mctx Jun 25 23:46:36.754: INFO: got data: { "image": "nautilus.jpg" } Jun 25 23:46:36.755: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 25 23:46:36.755: INFO: update-demo-nautilus-7mctx is verified up and running Jun 25 23:46:36.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7tfh6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7158' Jun 25 23:46:36.861: INFO: stderr: "" Jun 25 23:46:36.861: INFO: stdout: "true" Jun 25 23:46:36.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7tfh6 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7158' Jun 25 23:46:36.961: INFO: stderr: "" Jun 25 23:46:36.961: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 25 23:46:36.961: INFO: validating pod update-demo-nautilus-7tfh6 Jun 25 23:46:36.974: INFO: got data: { "image": "nautilus.jpg" } Jun 25 23:46:36.974: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 25 23:46:36.974: INFO: update-demo-nautilus-7tfh6 is verified up and running STEP: using delete to clean up resources Jun 25 23:46:36.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7158' Jun 25 23:46:37.093: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 25 23:46:37.093: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 25 23:46:37.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7158' Jun 25 23:46:37.196: INFO: stderr: "No resources found in kubectl-7158 namespace.\n" Jun 25 23:46:37.196: INFO: stdout: "" Jun 25 23:46:37.197: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7158 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 25 23:46:37.318: INFO: stderr: "" Jun 25 23:46:37.318: INFO: stdout: "update-demo-nautilus-7mctx\nupdate-demo-nautilus-7tfh6\n" Jun 25 23:46:37.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7158' Jun 25 23:46:38.051: INFO: stderr: "No resources found in kubectl-7158 namespace.\n" Jun 25 23:46:38.051: INFO: stdout: "" Jun 25 23:46:38.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7158 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 25 23:46:38.162: INFO: stderr: "" Jun 25 23:46:38.162: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:46:38.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7158" for this suite. 
• [SLOW TEST:9.927 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":294,"completed":32,"skipped":412,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:46:38.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Jun 25 23:46:38.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f -' Jun 25 23:46:41.730: INFO: stderr: "" Jun 25 23:46:41.730: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Jun 25 23:46:41.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config diff -f -' Jun 25 23:46:46.547: INFO: rc: 1 Jun 25 23:46:46.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete -f -' Jun 25 23:46:46.735: INFO: stderr: "" Jun 25 23:46:46.735: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:46:46.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7877" for this suite. 
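The "rc: 1" above is the expected outcome: kubectl diff uses exit status 0 for no differences, 1 for differences found, and greater than 1 for real errors. A small Go sketch of checking that from a wrapper, with an illustrative manifest path:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "deployment.yaml" is a placeholder; point it at any manifest whose live
	// state may have drifted from the declared one.
	cmd := exec.Command("kubectl", "diff", "-f", "deployment.yaml")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no differences")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		// Exit code 1 is the "difference found" case the e2e test expects.
		fmt.Printf("differences found:\n%s", out)
	default:
		panic(err)
	}
}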
• [SLOW TEST:8.576 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:871 should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":294,"completed":33,"skipped":429,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:46:46.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 25 23:46:46.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb4fb387-f063-4bbc-9387-2323f2c86206" in namespace "projected-8200" to be "Succeeded or Failed" Jun 25 23:46:46.986: INFO: Pod "downwardapi-volume-eb4fb387-f063-4bbc-9387-2323f2c86206": Phase="Pending", Reason="", readiness=false. Elapsed: 4.702094ms Jun 25 23:46:48.991: INFO: Pod "downwardapi-volume-eb4fb387-f063-4bbc-9387-2323f2c86206": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009701943s Jun 25 23:46:50.995: INFO: Pod "downwardapi-volume-eb4fb387-f063-4bbc-9387-2323f2c86206": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013584664s STEP: Saw pod success Jun 25 23:46:50.995: INFO: Pod "downwardapi-volume-eb4fb387-f063-4bbc-9387-2323f2c86206" satisfied condition "Succeeded or Failed" Jun 25 23:46:50.998: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-eb4fb387-f063-4bbc-9387-2323f2c86206 container client-container: STEP: delete the pod Jun 25 23:46:51.049: INFO: Waiting for pod downwardapi-volume-eb4fb387-f063-4bbc-9387-2323f2c86206 to disappear Jun 25 23:46:51.056: INFO: Pod downwardapi-volume-eb4fb387-f063-4bbc-9387-2323f2c86206 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:46:51.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8200" for this suite. 
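The projected downward-API test leans on a defaulting rule: when the container declares no memory limit, a resourceFieldRef for limits.memory is filled from the node's allocatable memory. An illustrative sketch of such a pod (module assumptions as before; names and image are not the suite's fixtures):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Note: no Resources on the container, so the kubelet falls back to the
	// node's allocatable memory when materializing "limits.memory".
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-memlimit-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					// Same downward-API item as before, but wrapped in a
					// projected volume source.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%#v\n", pod)
}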
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":34,"skipped":443,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:46:51.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 25 23:46:51.144: INFO: Waiting up to 5m0s for pod "pod-e63a7f29-be2a-4323-8164-890f49294b3b" in namespace "emptydir-1296" to be "Succeeded or Failed" Jun 25 23:46:51.164: INFO: Pod "pod-e63a7f29-be2a-4323-8164-890f49294b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.228981ms Jun 25 23:46:53.168: INFO: Pod "pod-e63a7f29-be2a-4323-8164-890f49294b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023743858s Jun 25 23:46:55.172: INFO: Pod "pod-e63a7f29-be2a-4323-8164-890f49294b3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027781353s STEP: Saw pod success Jun 25 23:46:55.172: INFO: Pod "pod-e63a7f29-be2a-4323-8164-890f49294b3b" satisfied condition "Succeeded or Failed" Jun 25 23:46:55.175: INFO: Trying to get logs from node latest-worker2 pod pod-e63a7f29-be2a-4323-8164-890f49294b3b container test-container: STEP: delete the pod Jun 25 23:46:55.360: INFO: Waiting for pod pod-e63a7f29-be2a-4323-8164-890f49294b3b to disappear Jun 25 23:46:55.418: INFO: Pod pod-e63a7f29-be2a-4323-8164-890f49294b3b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:46:55.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1296" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":35,"skipped":471,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:46:55.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-xkmx STEP: Creating a pod to test atomic-volume-subpath Jun 25 23:46:55.640: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-xkmx" in namespace "subpath-5876" to be "Succeeded or Failed" Jun 25 23:46:55.645: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.557594ms Jun 25 23:46:57.648: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007947199s Jun 25 23:46:59.653: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Running", Reason="", readiness=true. Elapsed: 4.013033875s Jun 25 23:47:01.657: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Running", Reason="", readiness=true. Elapsed: 6.017099614s Jun 25 23:47:03.660: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Running", Reason="", readiness=true. Elapsed: 8.019991493s Jun 25 23:47:05.664: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Running", Reason="", readiness=true. Elapsed: 10.023615031s Jun 25 23:47:07.667: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Running", Reason="", readiness=true. Elapsed: 12.027110869s Jun 25 23:47:09.671: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Running", Reason="", readiness=true. Elapsed: 14.03079581s Jun 25 23:47:11.706: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Running", Reason="", readiness=true. Elapsed: 16.065403044s Jun 25 23:47:13.736: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Running", Reason="", readiness=true. Elapsed: 18.095456146s Jun 25 23:47:15.740: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Running", Reason="", readiness=true. Elapsed: 20.099637067s Jun 25 23:47:17.748: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Running", Reason="", readiness=true. Elapsed: 22.107381521s Jun 25 23:47:19.768: INFO: Pod "pod-subpath-test-projected-xkmx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.12780649s STEP: Saw pod success Jun 25 23:47:19.768: INFO: Pod "pod-subpath-test-projected-xkmx" satisfied condition "Succeeded or Failed" Jun 25 23:47:19.771: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-xkmx container test-container-subpath-projected-xkmx: STEP: delete the pod Jun 25 23:47:19.839: INFO: Waiting for pod pod-subpath-test-projected-xkmx to disappear Jun 25 23:47:19.848: INFO: Pod pod-subpath-test-projected-xkmx no longer exists STEP: Deleting pod pod-subpath-test-projected-xkmx Jun 25 23:47:19.848: INFO: Deleting pod "pod-subpath-test-projected-xkmx" in namespace "subpath-5876" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:47:19.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5876" for this suite. • [SLOW TEST:24.412 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":294,"completed":36,"skipped":509,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:47:19.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support building a client with a CSR [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:47:20.100: INFO: creating CSR Jun 25 23:47:20.103: FAIL: Unexpected error: <*errors.StatusError | 0xc002cd9860>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func2.1() /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:117 +0xaa6 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00280c600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x360 k8s.io/kubernetes/test/e2e.TestE2E(0xc00280c600) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:141 +0x2b testing.tRunner(0xc00280c600, 0x4e37068) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run 
/usr/local/go/src/testing/testing.go:960 +0x350 [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "certificates-1914". STEP: Found 0 events. Jun 25 23:47:20.124: INFO: POD NODE PHASE GRACE CONDITIONS Jun 25 23:47:20.124: INFO: Jun 25 23:47:20.128: INFO: Logging node info for node latest-control-plane Jun 25 23:47:20.130: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane b7c23ecc-1548-479e-83f7-eb5444fbe13d 15902948 0 2020-04-29 09:53:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:53:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2020-06-25 23:42:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki 
BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-06-25 23:42:43 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-06-25 23:42:43 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-06-25 23:42:43 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-06-25 23:42:43 +0000 UTC,LastTransitionTime:2020-04-29 09:54:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.11,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3939cf129c9d4d6e85e611ab996d9137,SystemUUID:2573ae1d-4849-412e-9a34-432f95556990,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 25 23:47:20.131: INFO: Logging kubelet events for node latest-control-plane Jun 25 23:47:20.133: INFO: Logging pods the kubelet thinks is on node latest-control-plane Jun 25 23:47:20.157: INFO: kube-scheduler-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.157: INFO: Container kube-scheduler ready: true, restart count 115 Jun 25 23:47:20.157: INFO: kube-controller-manager-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.157: INFO: Container kube-controller-manager ready: true, restart count 119 Jun 25 23:47:20.157: INFO: kube-proxy-h8mhz started at 2020-04-29 09:53:54 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.157: INFO: Container kube-proxy ready: true, restart count 0 Jun 25 23:47:20.157: INFO: coredns-66bff467f8-qr7l5 started at 2020-04-29 
09:54:10 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.157: INFO: Container coredns ready: true, restart count 0 Jun 25 23:47:20.157: INFO: etcd-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.157: INFO: Container etcd ready: true, restart count 4 Jun 25 23:47:20.157: INFO: kube-apiserver-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.157: INFO: Container kube-apiserver ready: true, restart count 2 Jun 25 23:47:20.157: INFO: kindnet-8x7pf started at 2020-04-29 09:53:53 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.157: INFO: Container kindnet-cni ready: true, restart count 4 Jun 25 23:47:20.157: INFO: coredns-66bff467f8-8n5vh started at 2020-04-29 09:54:06 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.157: INFO: Container coredns ready: true, restart count 0 Jun 25 23:47:20.157: INFO: local-path-provisioner-bd4bb6b75-bmf2h started at 2020-04-29 09:54:06 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.157: INFO: Container local-path-provisioner ready: true, restart count 87 W0625 23:47:20.161847 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 25 23:47:20.254: INFO: Latency metrics for node latest-control-plane Jun 25 23:47:20.254: INFO: Logging node info for node latest-worker Jun 25 23:47:20.257: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker 2f09bb79-b24c-46f4-8a0d-ace124db698c 15903794 0 2020-04-29 09:54:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:54:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2020-06-25 23:46:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-06-25 23:46:17 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-06-25 23:46:17 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-06-25 23:46:17 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-06-25 23:46:17 +0000 UTC,LastTransitionTime:2020-04-29 09:54:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.13,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:83dc4a3bd84a4693999c93a6c8c5f678,SystemUUID:66e94596-e77d-487e-8e4a-bc652b040cea,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:9e6d47f5fb42621781fac92b9f8f86a7e00596fd5c022472a51d33b8c6638b85 
docker.io/aquasec/kube-hunter:latest],SizeBytes:126124611,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:c42be6eafdbe71363ad6a2035fe53f12dbe36aab19a1a3c015231e97cd11d986],SizeBytes:8039911,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bdfc3a8aeed63e545ab0df01806707219ffb785bca75e08cbee043075dedfb3c],SizeBytes:8039898,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a3fe72ad3946d830134b92e5c922a92d4aeb594f0445d178f9e2d610b1be04b5],SizeBytes:8039861,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:6da1996cf654bbc10175028832d6ffb92720272d0deca971bb296ea9092f4273],SizeBytes:8039845,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:cab37ac2de78ddbc6655eddae38239ebafdf79a7934bc53361e1524a2ed5ab56 docker.io/aquasec/kube-bench:latest],SizeBytes:8035885,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c docker.io/library/busybox@sha256:52cfc475afdd697afd2dbe1a3761c8001bf3ba39f76819c922128c088869d339],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 docker.io/library/busybox:latest],SizeBytes:764556,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 25 23:47:20.258: INFO: Logging kubelet events for node latest-worker Jun 25 23:47:20.260: INFO: Logging pods the kubelet thinks is on node latest-worker Jun 25 23:47:20.266: INFO: kindnet-hg2tf started at 2020-04-29 09:54:13 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.266: INFO: Container kindnet-cni ready: true, restart count 5 Jun 25 23:47:20.266: INFO: rally-c184502e-30nwopzm started at 2020-05-11 08:48:25 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.266: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 25 23:47:20.266: INFO: rally-c184502e-30nwopzm-7fmqm started at 2020-05-11 08:48:29 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.266: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 25 23:47:20.266: INFO: kube-proxy-c8n27 started at 2020-04-29 09:54:13 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.266: INFO: Container kube-proxy ready: true, restart count 0 W0625 23:47:20.270670 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
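------------------------------
The metrics_grabber warning just above (it recurs once per node in these diagnostics) is expected on a kind cluster: the grabber only scrapes Scheduler, ControllerManager and ClusterAutoscaler metrics when it believes a master node is registered, and at the time of this run that check was driven by node-name heuristics, which "latest-control-plane" does not satisfy even though the node carries the master label and taint (see the node dump above). The sketch below is a rough approximation of that registration check using client-go; the suffix test is an assumption standing in for the framework's actual helper, not its exact logic.

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: the same kubeconfig path this run uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        registered := false
        for _, n := range nodes.Items {
            // Name-based stand-in for the framework's master detection; a kind
            // node named "latest-control-plane" fails it, hence the warning.
            if strings.HasSuffix(n.Name, "-master") {
                registered = true
            }
        }
        if !registered {
            fmt.Println("master not registered; skip Scheduler/ControllerManager metrics")
        }
    }
------------------------------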
Jun 25 23:47:20.325: INFO: Latency metrics for node latest-worker Jun 25 23:47:20.325: INFO: Logging node info for node latest-worker2 Jun 25 23:47:20.329: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 edb8c16e-16f9-40fa-97b0-84ba80a01b1f 15903621 0 2020-04-29 09:54:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:54:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2020-06-25 23:45:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-06-25 23:45:43 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-06-25 23:45:43 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-06-25 23:45:43 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-06-25 23:45:43 +0000 UTC,LastTransitionTime:2020-04-29 09:54:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.12,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a92a0b35db3a4f1fb7e74bf96e498c99,SystemUUID:8fa82d10-b80f-4f70-a9ff-665f94ff4ecc,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:d0af3efaa83cf2106879b7fd3972faaee44a0d4a82db97b27f33f8c71aa450b3 docker.io/aquasec/kube-hunter:latest],SizeBytes:127384616,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:9e6d47f5fb42621781fac92b9f8f86a7e00596fd5c022472a51d33b8c6638b85],SizeBytes:126124611,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bdfc3a8aeed63e545ab0df01806707219ffb785bca75e08cbee043075dedfb3c],SizeBytes:8039898,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a3fe72ad3946d830134b92e5c922a92d4aeb594f0445d178f9e2d610b1be04b5 
docker.io/aquasec/kube-bench:latest],SizeBytes:8039861,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:52cfc475afdd697afd2dbe1a3761c8001bf3ba39f76819c922128c088869d339 docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 docker.io/library/busybox:latest],SizeBytes:764556,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 25 23:47:20.330: INFO: Logging kubelet events for node latest-worker2 Jun 25 23:47:20.332: INFO: Logging pods the kubelet thinks is on node latest-worker2 Jun 25 23:47:20.337: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 started at 2020-05-12 09:11:35 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.337: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 25 23:47:20.337: INFO: kindnet-jl4dn started at 2020-04-29 09:54:11 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.337: INFO: Container kindnet-cni ready: true, restart count 5 Jun 25 23:47:20.337: INFO: kube-proxy-pcmmp started at 2020-04-29 09:54:11 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.337: INFO: Container kube-proxy ready: true, restart count 0 Jun 25 23:47:20.337: INFO: rally-c184502e-ept97j69-6xvbj started at 2020-05-11 08:48:03 +0000 UTC (0+1 container statuses recorded) Jun 25 23:47:20.337: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 W0625 23:47:20.340410 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
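------------------------------
The node dumps above are the framework's standard post-failure diagnostics; the failure itself is summarized just below: building a client with a CSR fails with a 404, "the server could not find the requested resource". That is consistent with the version skew recorded at the start of the run: the e2e binary is v1.19.0-beta, whose Certificates tests exercise the certificates.k8s.io/v1 API (which graduated to v1 in Kubernetes 1.19), while the apiserver under test is v1.18.2 and serves only certificates.k8s.io/v1beta1. Below is a minimal client-go sketch of how a client could detect that skew up front via discovery; the fallback message is illustrative, not what the framework does.

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Ask the apiserver whether it serves certificates.k8s.io/v1 at all.
        // A v1.18 server does not, and requests against the v1 CSR resource
        // then come back as 404, matching the failure summarized nearby.
        if _, err := cs.Discovery().ServerResourcesForGroupVersion("certificates.k8s.io/v1"); err != nil {
            fmt.Println("certificates.k8s.io/v1 not served; a client would fall back to v1beta1:", err)
            return
        }
        fmt.Println("certificates.k8s.io/v1 is available")
    }
------------------------------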
Jun 25 23:47:20.387: INFO: Latency metrics for node latest-worker2 Jun 25 23:47:20.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-1914" for this suite. • Failure [0.537 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support building a client with a CSR [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:47:20.103: Unexpected error: <*errors.StatusError | 0xc002cd9860>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:117 ------------------------------ {"msg":"FAILED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]","total":294,"completed":36,"skipped":514,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:47:20.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jun 25 23:47:25.030: INFO: Successfully updated pod "labelsupdatebd5708a3-ee1e-4fa1-98cc-33c077c21c0e" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:47:27.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7417" for this suite. 
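------------------------------
The Projected downwardAPI spec that just finished (summary below) checks a live-update property: labels projected into a volume file are rewritten by the kubelet when the pod's labels change, so containers see the new values without a restart. "Successfully updated pod" above marks the label mutation; the test then polls the projected file until the new content appears. A minimal sketch of the volume shape involved, built from the k8s.io/api/core/v1 types (the volume and file names are illustrative, not taken from the test):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // A projected downward API volume exposing the pod's labels as a file.
        // The kubelet refreshes the file when labels change on the live pod.
        vol := corev1.Volume{
            Name: "podinfo", // illustrative name
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "labels",
                                FieldRef: &corev1.ObjectFieldSelector{
                                    FieldPath: "metadata.labels",
                                },
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
------------------------------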
• [SLOW TEST:6.676 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":294,"completed":37,"skipped":538,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:47:27.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:47:31.216: INFO: Waiting up to 5m0s for pod "client-envvars-a7c41977-c680-4a42-8ddc-544e80ea95fe" in namespace "pods-3730" to be "Succeeded or Failed" Jun 25 23:47:31.263: INFO: Pod "client-envvars-a7c41977-c680-4a42-8ddc-544e80ea95fe": Phase="Pending", Reason="", readiness=false. Elapsed: 47.359397ms Jun 25 23:47:33.268: INFO: Pod "client-envvars-a7c41977-c680-4a42-8ddc-544e80ea95fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052277101s Jun 25 23:47:35.273: INFO: Pod "client-envvars-a7c41977-c680-4a42-8ddc-544e80ea95fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057082263s STEP: Saw pod success Jun 25 23:47:35.273: INFO: Pod "client-envvars-a7c41977-c680-4a42-8ddc-544e80ea95fe" satisfied condition "Succeeded or Failed" Jun 25 23:47:35.276: INFO: Trying to get logs from node latest-worker pod client-envvars-a7c41977-c680-4a42-8ddc-544e80ea95fe container env3cont: STEP: delete the pod Jun 25 23:47:35.293: INFO: Waiting for pod client-envvars-a7c41977-c680-4a42-8ddc-544e80ea95fe to disappear Jun 25 23:47:35.298: INFO: Pod client-envvars-a7c41977-c680-4a42-8ddc-544e80ea95fe no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:47:35.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3730" for this suite. 
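------------------------------
The Pods spec that just finished (summary below) exercises the older, environment-variable-based service discovery path: for every service that exists when a container starts, the kubelet injects <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT variables (plus Docker-link-style ones), which is why the test creates its service first and only then starts the client pod whose logs it inspects. A small, self-contained sketch of the container-side check (illustrative; the real test asserts on specific variable names):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Print the service-discovery variables the kubelet injected for
        // services that existed when this container started.
        for _, kv := range os.Environ() {
            if strings.Contains(kv, "_SERVICE_HOST=") || strings.Contains(kv, "_SERVICE_PORT=") {
                fmt.Println(kv)
            }
        }
    }
------------------------------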
• [SLOW TEST:8.263 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":294,"completed":38,"skipped":561,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:47:35.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 25 23:47:35.423: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 25 23:47:40.436: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:47:40.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-454" for this suite. 
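------------------------------
The ReplicationController spec that just finished (summary below) demonstrates orphaning: the test waits for the RC's pod ("Found 1 pods out of 1"), then rewrites the pod's labels so they no longer match the RC's selector; the controller releases the pod from the set it owns and, to restore the replica count, creates a replacement. A hedged client-go sketch of triggering that release with a strategic-merge patch; the pod name and label value are illustrative placeholders, not values from this run.

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Overwrite the label the RC selects on; the controller then releases
        // this pod and creates a new one to keep the replica count at 1.
        patch := []byte(`{"metadata":{"labels":{"name":"pod-release-released"}}}`)
        _, err = cs.CoreV1().Pods("replication-controller-454").Patch(
            context.TODO(), "pod-release-placeholder", // illustrative pod name
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
    }
------------------------------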
• [SLOW TEST:5.298 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":294,"completed":39,"skipped":564,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:47:40.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:47:40.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1478' Jun 25 23:47:43.873: INFO: stderr: "" Jun 25 23:47:43.873: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jun 25 23:47:43.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1478' Jun 25 23:47:48.328: INFO: stderr: "" Jun 25 23:47:48.328: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 25 23:47:49.333: INFO: Selector matched 1 pods for map[app:agnhost] Jun 25 23:47:49.333: INFO: Found 0 / 1 Jun 25 23:47:50.332: INFO: Selector matched 1 pods for map[app:agnhost] Jun 25 23:47:50.332: INFO: Found 1 / 1 Jun 25 23:47:50.332: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 25 23:47:50.335: INFO: Selector matched 1 pods for map[app:agnhost] Jun 25 23:47:50.335: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
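------------------------------
The Kubectl describe spec that started above shells out to the real kubectl binary and greps its output: the invocations below describe the pod, the replication controller, the service, a node and the namespace in turn, and the spec passes only if each dump contains the expected fields. A compact sketch of that pattern follows; the flag set and the exact substring list are assumptions for illustration, not the test's own checks.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Shell out to kubectl the way the e2e framework does, then assert
        // that the describe output carries the fields the test checks for.
        out, err := exec.Command(
            "/usr/local/bin/kubectl",
            "--kubeconfig=/root/.kube/config",
            "describe", "pod", "agnhost-master-slftm",
            "--namespace=kubectl-1478",
        ).CombinedOutput()
        if err != nil {
            panic(err)
        }
        for _, want := range []string{"Name:", "Namespace:", "Status:", "Controlled By:"} {
            if !strings.Contains(string(out), want) {
                panic(fmt.Sprintf("describe output missing %q", want))
            }
        }
        fmt.Println("describe output contains the expected fields")
    }
------------------------------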
Jun 25 23:47:50.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-slftm --namespace=kubectl-1478' Jun 25 23:47:50.474: INFO: stderr: "" Jun 25 23:47:50.474: INFO: stdout: "Name: agnhost-master-slftm\nNamespace: kubectl-1478\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Thu, 25 Jun 2020 23:47:43 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.54\nIPs:\n IP: 10.244.1.54\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://d60f863a74dec067ef4ba6060d30d32a488f593c8a3100f198317b0e62357df4\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 25 Jun 2020 23:47:49 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-bm9v6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-bm9v6:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-bm9v6\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-1478/agnhost-master-slftm to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" Jun 25 23:47:50.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1478' Jun 25 23:47:50.622: INFO: stderr: "" Jun 25 23:47:50.622: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1478\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-master-slftm\n" Jun 25 23:47:50.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1478' Jun 25 23:47:50.725: INFO: stderr: "" Jun 25 23:47:50.725: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1478\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.110.200.178\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.54:6379\nSession Affinity: None\nEvents: \n" Jun 25 23:47:50.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe node latest-control-plane' Jun 25 23:47:50.873: INFO: stderr: "" Jun 25 23:47:50.873: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Thu, 25 Jun 2020 23:47:49 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 25 Jun 2020 23:47:43 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 25 Jun 2020 23:47:43 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 25 Jun 2020 23:47:43 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 25 Jun 2020 23:47:43 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 57d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 57d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 57d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jun 25 23:47:50.873: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe namespace kubectl-1478' Jun 25 23:47:50.982: INFO: stderr: "" Jun 25 23:47:50.982: INFO: stdout: "Name: kubectl-1478\nLabels: e2e-framework=kubectl\n e2e-run=b15b0c1e-dce6-4226-8447-6b2b37a23b07\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:47:50.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1478" for this suite. • [SLOW TEST:10.413 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1088 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":294,"completed":40,"skipped":587,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:47:51.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2874 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2874 STEP: creating replication controller externalsvc in namespace services-2874 I0625 23:47:51.237299 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2874, replica count: 2 I0625 23:47:54.287753 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0625 23:47:57.288030 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jun 25 23:47:57.353: INFO: Creating new exec pod Jun 25 23:48:01.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=services-2874 execpodlddvw -- /bin/sh -x -c nslookup clusterip-service' Jun 25 23:48:01.738: INFO: stderr: "I0625 23:48:01.491262 687 log.go:172] (0xc000a50f20) (0xc00056d860) Create stream\nI0625 23:48:01.491331 687 log.go:172] (0xc000a50f20) (0xc00056d860) Stream added, broadcasting: 1\nI0625 23:48:01.494324 687 log.go:172] (0xc000a50f20) Reply frame received for 1\nI0625 23:48:01.494359 687 log.go:172] (0xc000a50f20) (0xc0005d2320) Create stream\nI0625 23:48:01.494368 687 log.go:172] (0xc000a50f20) (0xc0005d2320) Stream added, broadcasting: 3\nI0625 23:48:01.495206 687 log.go:172] (0xc000a50f20) Reply frame received for 3\nI0625 23:48:01.495226 687 log.go:172] (0xc000a50f20) (0xc0002c75e0) Create stream\nI0625 23:48:01.495626 687 log.go:172] (0xc000a50f20) (0xc0002c75e0) Stream added, broadcasting: 5\nI0625 23:48:01.497947 687 log.go:172] (0xc000a50f20) Reply frame received for 5\nI0625 23:48:01.614500 687 log.go:172] (0xc000a50f20) Data frame received for 5\nI0625 23:48:01.614534 687 log.go:172] (0xc0002c75e0) (5) Data frame handling\nI0625 23:48:01.614560 687 log.go:172] (0xc0002c75e0) (5) Data frame sent\n+ nslookup clusterip-service\nI0625 23:48:01.726269 687 log.go:172] (0xc000a50f20) Data frame received for 3\nI0625 23:48:01.726300 687 log.go:172] (0xc0005d2320) (3) Data frame handling\nI0625 23:48:01.726325 687 log.go:172] (0xc0005d2320) (3) Data frame sent\nI0625 23:48:01.727313 687 log.go:172] (0xc000a50f20) Data frame received for 3\nI0625 23:48:01.727331 687 log.go:172] (0xc0005d2320) (3) Data frame handling\nI0625 23:48:01.727350 687 log.go:172] (0xc0005d2320) (3) Data frame sent\nI0625 23:48:01.728057 687 log.go:172] (0xc000a50f20) Data frame received for 5\nI0625 23:48:01.728091 687 log.go:172] (0xc0002c75e0) (5) Data frame handling\nI0625 23:48:01.728125 687 log.go:172] (0xc000a50f20) Data frame received for 3\nI0625 23:48:01.728146 687 log.go:172] (0xc0005d2320) (3) Data frame handling\nI0625 23:48:01.730710 687 log.go:172] (0xc000a50f20) Data frame received for 1\nI0625 23:48:01.730731 687 log.go:172] (0xc00056d860) (1) Data frame handling\nI0625 23:48:01.730753 687 log.go:172] (0xc00056d860) (1) Data frame sent\nI0625 23:48:01.730773 687 log.go:172] (0xc000a50f20) (0xc00056d860) Stream removed, broadcasting: 1\nI0625 23:48:01.730882 687 log.go:172] (0xc000a50f20) Go away received\nI0625 23:48:01.731244 687 log.go:172] (0xc000a50f20) (0xc00056d860) Stream removed, broadcasting: 1\nI0625 23:48:01.731277 687 log.go:172] (0xc000a50f20) (0xc0005d2320) Stream removed, broadcasting: 3\nI0625 23:48:01.731297 687 log.go:172] (0xc000a50f20) (0xc0002c75e0) Stream removed, broadcasting: 5\n" Jun 25 23:48:01.738: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2874.svc.cluster.local\tcanonical name = externalsvc.services-2874.svc.cluster.local.\nName:\texternalsvc.services-2874.svc.cluster.local\nAddress: 10.100.187.33\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2874, will wait for the garbage collector to delete the pods Jun 25 23:48:01.798: INFO: Deleting ReplicationController externalsvc took: 6.386335ms Jun 25 23:48:01.898: INFO: Terminating ReplicationController externalsvc pods took: 100.22692ms Jun 25 23:48:15.345: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:48:15.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
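------------------------------
The Services spec above proves the ClusterIP-to-ExternalName conversion through DNS: the nslookup output shows clusterip-service.services-2874.svc.cluster.local resolving as a CNAME to externalsvc.services-2874.svc.cluster.local, which is exactly the behavior an ExternalName service provides. When flipping the type, the cluster IP has to be cleared, since ExternalName services do not allocate one. A hedged client-go sketch of that conversion (service and namespace names are taken from the log; treating a single Update as sufficient is an assumption rather than the test's exact code path):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        svcs := kubernetes.NewForConfigOrDie(cfg).CoreV1().Services("services-2874")

        svc, err := svcs.Get(context.TODO(), "clusterip-service", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // ExternalName services resolve purely via a DNS CNAME and allocate no
        // cluster IP, so the IP must be cleared as part of the conversion.
        svc.Spec.Type = corev1.ServiceTypeExternalName
        svc.Spec.ExternalName = "externalsvc.services-2874.svc.cluster.local"
        svc.Spec.ClusterIP = ""

        if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
------------------------------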
STEP: Destroying namespace "services-2874" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:24.366 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":294,"completed":41,"skipped":620,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:48:15.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 25 23:48:16.288: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 25 23:48:18.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725696, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725696, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725696, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725696, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 25 23:48:20.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725696, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725696, loc:(*time.Location)(0x80643c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725696, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725696, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 25 23:48:23.486: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:48:23.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8244" for this suite. STEP: Destroying namespace "webhook-8244-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.429 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":294,"completed":42,"skipped":630,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:48:23.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 25 23:48:23.896: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 25 23:48:23.913: INFO: Waiting for terminating namespaces to be deleted... 
Jun 25 23:48:23.915: INFO: Logging pods the apiserver thinks are on node latest-worker before test
Jun 25 23:48:23.919: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded)
Jun 25 23:48:23.919: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
Jun 25 23:48:23.919: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded)
Jun 25 23:48:23.919: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
Jun 25 23:48:23.919: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded)
Jun 25 23:48:23.919: INFO: Container kindnet-cni ready: true, restart count 5
Jun 25 23:48:23.919: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded)
Jun 25 23:48:23.919: INFO: Container kube-proxy ready: true, restart count 0
Jun 25 23:48:23.919: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test
Jun 25 23:48:23.947: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded)
Jun 25 23:48:23.947: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
Jun 25 23:48:23.947: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded)
Jun 25 23:48:23.947: INFO: Container terminate-cmd-rpa ready: true, restart count 2
Jun 25 23:48:23.947: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded)
Jun 25 23:48:23.947: INFO: Container kindnet-cni ready: true, restart count 5
Jun 25 23:48:23.947: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded)
Jun 25 23:48:23.947: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-54f942ea-db62-4524-8d89-859854afaa03 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-54f942ea-db62-4524-8d89-859854afaa03 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-54f942ea-db62-4524-8d89-859854afaa03
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:48:40.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5383" for this suite.
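
The three pods above coexist because the scheduler and kubelet treat a host port as occupied per (hostIP, hostPort, protocol) tuple, not per port number alone. A sketch of the port specs involved, using the k8s.io/api types; the container port 8080 is illustrative, 54321 and the hostIPs are the test's:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // One entry per test pod; all three fit on the same node because
        // the (hostIP, hostPort, protocol) tuples differ.
        ports := []corev1.ContainerPort{
            {ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}, // pod1
            {ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolTCP}, // pod2: different hostIP
            {ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolUDP}, // pod3: different protocol
        }
        for i, p := range ports {
            fmt.Printf("pod%d: %s %s:%d\n", i+1, p.Protocol, p.HostIP, p.HostPort)
        }
    }
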
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.393 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":294,"completed":43,"skipped":634,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:48:40.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 25 23:48:40.364: INFO: Waiting up to 5m0s for pod "downward-api-8995c7ca-2b65-46bd-8202-385d21e2ccdc" in namespace "downward-api-9695" to be "Succeeded or Failed" Jun 25 23:48:40.369: INFO: Pod "downward-api-8995c7ca-2b65-46bd-8202-385d21e2ccdc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.611763ms Jun 25 23:48:42.437: INFO: Pod "downward-api-8995c7ca-2b65-46bd-8202-385d21e2ccdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073280304s Jun 25 23:48:44.441: INFO: Pod "downward-api-8995c7ca-2b65-46bd-8202-385d21e2ccdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077606423s STEP: Saw pod success Jun 25 23:48:44.441: INFO: Pod "downward-api-8995c7ca-2b65-46bd-8202-385d21e2ccdc" satisfied condition "Succeeded or Failed" Jun 25 23:48:44.444: INFO: Trying to get logs from node latest-worker2 pod downward-api-8995c7ca-2b65-46bd-8202-385d21e2ccdc container dapi-container: STEP: delete the pod Jun 25 23:48:44.517: INFO: Waiting for pod downward-api-8995c7ca-2b65-46bd-8202-385d21e2ccdc to disappear Jun 25 23:48:44.529: INFO: Pod downward-api-8995c7ca-2b65-46bd-8202-385d21e2ccdc no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:48:44.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9695" for this suite. 
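
The pod in this spec learns its node's IP without any network call: the kubelet resolves a downward-API fieldRef against the pod object when it starts the container. A sketch of the env var involved, with an illustrative variable name:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // status.hostIP is populated by the kubelet once the pod is bound
        // to a node; the test then checks the value inside the container.
        env := corev1.EnvVar{
            Name: "HOST_IP",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
            },
        }
        fmt.Printf("%s <- fieldRef %s\n", env.Name, env.ValueFrom.FieldRef.FieldPath)
    }
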
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":294,"completed":44,"skipped":642,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:48:44.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 25 23:48:44.603: INFO: Waiting up to 5m0s for pod "pod-b7c31b47-a6a6-417b-a52a-11271c40de6d" in namespace "emptydir-5230" to be "Succeeded or Failed" Jun 25 23:48:44.671: INFO: Pod "pod-b7c31b47-a6a6-417b-a52a-11271c40de6d": Phase="Pending", Reason="", readiness=false. Elapsed: 68.408277ms Jun 25 23:48:46.683: INFO: Pod "pod-b7c31b47-a6a6-417b-a52a-11271c40de6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080461079s Jun 25 23:48:48.846: INFO: Pod "pod-b7c31b47-a6a6-417b-a52a-11271c40de6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243028826s Jun 25 23:48:50.849: INFO: Pod "pod-b7c31b47-a6a6-417b-a52a-11271c40de6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.246781673s STEP: Saw pod success Jun 25 23:48:50.850: INFO: Pod "pod-b7c31b47-a6a6-417b-a52a-11271c40de6d" satisfied condition "Succeeded or Failed" Jun 25 23:48:50.852: INFO: Trying to get logs from node latest-worker2 pod pod-b7c31b47-a6a6-417b-a52a-11271c40de6d container test-container: STEP: delete the pod Jun 25 23:48:50.892: INFO: Waiting for pod pod-b7c31b47-a6a6-417b-a52a-11271c40de6d to disappear Jun 25 23:48:50.908: INFO: Pod pod-b7c31b47-a6a6-417b-a52a-11271c40de6d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:48:50.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5230" for this suite. 
• [SLOW TEST:6.376 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":45,"skipped":662,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:48:50.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-3747 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3747 STEP: Deleting pre-stop pod Jun 25 23:49:04.032: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:49:04.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3747" for this suite. 
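
The "prestop": 1 counter in the server's JSON above is driven by a lifecycle hook on the tester pod: when that pod is deleted, the kubelet runs the preStop handler before sending SIGTERM. A sketch of such a hook with an illustrative target address; note that client-go of this log's vintage calls the handler type corev1.Handler (later releases rename it LifecycleHandler):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        lifecycle := &corev1.Lifecycle{
            // Executed on pod deletion, before the container gets SIGTERM;
            // here an HTTP GET against the server pod, as in the test.
            PreStop: &corev1.Handler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/prestop",
                    Host: "10.244.1.5",         // server pod IP; illustrative
                    Port: intstr.FromInt(8080), // illustrative port
                },
            },
        }
        fmt.Println("preStop hits", lifecycle.PreStop.HTTPGet.Path)
    }
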
• [SLOW TEST:13.157 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":294,"completed":46,"skipped":674,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:49:04.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-b29eb381-3864-4b5a-bd93-4b301adc9e81 STEP: Creating a pod to test consume secrets Jun 25 23:49:04.276: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d1a640b8-7474-47c1-8a64-2f968d6f9a3a" in namespace "projected-5862" to be "Succeeded or Failed" Jun 25 23:49:04.366: INFO: Pod "pod-projected-secrets-d1a640b8-7474-47c1-8a64-2f968d6f9a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 89.210683ms Jun 25 23:49:06.370: INFO: Pod "pod-projected-secrets-d1a640b8-7474-47c1-8a64-2f968d6f9a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09332241s Jun 25 23:49:08.375: INFO: Pod "pod-projected-secrets-d1a640b8-7474-47c1-8a64-2f968d6f9a3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09839269s STEP: Saw pod success Jun 25 23:49:08.375: INFO: Pod "pod-projected-secrets-d1a640b8-7474-47c1-8a64-2f968d6f9a3a" satisfied condition "Succeeded or Failed" Jun 25 23:49:08.378: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d1a640b8-7474-47c1-8a64-2f968d6f9a3a container projected-secret-volume-test: STEP: delete the pod Jun 25 23:49:08.422: INFO: Waiting for pod pod-projected-secrets-d1a640b8-7474-47c1-8a64-2f968d6f9a3a to disappear Jun 25 23:49:08.435: INFO: Pod pod-projected-secrets-d1a640b8-7474-47c1-8a64-2f968d6f9a3a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:49:08.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5862" for this suite. 
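
The non-root, defaultMode, and fsGroup combination in this spec works because the kubelet applies defaultMode to each projected file and group-owns the files with the pod's fsGroup, so an unprivileged UID can still read the secret. A sketch with illustrative names, mode, and IDs:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0440)    // applied to each projected file
        fsGroup := int64(1001) // files end up group-owned by this GID

        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
                        },
                    }},
                },
            },
        }
        sc := corev1.PodSecurityContext{FSGroup: &fsGroup}
        fmt.Printf("%s mode=%#o fsGroup=%d\n", vol.Name, *vol.VolumeSource.Projected.DefaultMode, *sc.FSGroup)
    }
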
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":47,"skipped":675,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:49:08.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-4ea6b6d2-24b8-4a87-8b9b-f515774ad17c STEP: Creating a pod to test consume configMaps Jun 25 23:49:08.585: INFO: Waiting up to 5m0s for pod "pod-configmaps-77218539-1f96-4085-be6f-f03797b3a72a" in namespace "configmap-6211" to be "Succeeded or Failed" Jun 25 23:49:08.592: INFO: Pod "pod-configmaps-77218539-1f96-4085-be6f-f03797b3a72a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.475319ms Jun 25 23:49:10.596: INFO: Pod "pod-configmaps-77218539-1f96-4085-be6f-f03797b3a72a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010989973s Jun 25 23:49:12.601: INFO: Pod "pod-configmaps-77218539-1f96-4085-be6f-f03797b3a72a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015805253s STEP: Saw pod success Jun 25 23:49:12.601: INFO: Pod "pod-configmaps-77218539-1f96-4085-be6f-f03797b3a72a" satisfied condition "Succeeded or Failed" Jun 25 23:49:12.604: INFO: Trying to get logs from node latest-worker pod pod-configmaps-77218539-1f96-4085-be6f-f03797b3a72a container configmap-volume-test: STEP: delete the pod Jun 25 23:49:12.638: INFO: Waiting for pod pod-configmaps-77218539-1f96-4085-be6f-f03797b3a72a to disappear Jun 25 23:49:12.651: INFO: Pod pod-configmaps-77218539-1f96-4085-be6f-f03797b3a72a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:49:12.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6211" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":48,"skipped":688,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:49:12.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Jun 25 23:49:17.302: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7629 pod-service-account-89a16906-27aa-426f-8f50-683aae0acb66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 25 23:49:17.628: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7629 pod-service-account-89a16906-27aa-426f-8f50-683aae0acb66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 25 23:49:17.829: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7629 pod-service-account-89a16906-27aa-426f-8f50-683aae0acb66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:49:18.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7629" for this suite. 
• [SLOW TEST:5.467 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":294,"completed":49,"skipped":697,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:49:18.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Jun 25 23:49:18.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' Jun 25 23:49:18.310: INFO: stderr: "" Jun 25 23:49:18.310: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:49:18.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7304" for this suite. 
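
The \x1b[0;32m sequences in the cluster-info stdout above are ANSI color codes, so any assertion on that output has to see through them. One way, sketched here with an abbreviated copy of the logged string, is to strip the escapes before matching:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // Matches SGR color sequences such as \x1b[0;32m and \x1b[0m.
    var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

    func main() {
        out := "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m"
        plain := ansi.ReplaceAllString(out, "")
        fmt.Println(strings.Contains(plain, "Kubernetes master")) // true
    }
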
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":294,"completed":50,"skipped":728,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:49:18.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6206 STEP: creating service affinity-nodeport in namespace services-6206 STEP: creating replication controller affinity-nodeport in namespace services-6206 I0625 23:49:18.495349 8 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-6206, replica count: 3 I0625 23:49:21.545809 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0625 23:49:24.546038 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 25 23:49:24.557: INFO: Creating new exec pod Jun 25 23:49:29.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6206 execpod-affinitysts2n -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jun 25 23:49:29.874: INFO: stderr: "I0625 23:49:29.750701 788 log.go:172] (0xc0000ea370) (0xc00069e1e0) Create stream\nI0625 23:49:29.750768 788 log.go:172] (0xc0000ea370) (0xc00069e1e0) Stream added, broadcasting: 1\nI0625 23:49:29.754183 788 log.go:172] (0xc0000ea370) Reply frame received for 1\nI0625 23:49:29.754222 788 log.go:172] (0xc0000ea370) (0xc00069eaa0) Create stream\nI0625 23:49:29.754236 788 log.go:172] (0xc0000ea370) (0xc00069eaa0) Stream added, broadcasting: 3\nI0625 23:49:29.755176 788 log.go:172] (0xc0000ea370) Reply frame received for 3\nI0625 23:49:29.755211 788 log.go:172] (0xc0000ea370) (0xc000606a00) Create stream\nI0625 23:49:29.755223 788 log.go:172] (0xc0000ea370) (0xc000606a00) Stream added, broadcasting: 5\nI0625 23:49:29.756186 788 log.go:172] (0xc0000ea370) Reply frame received for 5\nI0625 23:49:29.850336 788 log.go:172] (0xc0000ea370) Data frame received for 5\nI0625 23:49:29.850362 788 log.go:172] (0xc000606a00) (5) Data frame handling\nI0625 23:49:29.850370 788 log.go:172] (0xc000606a00) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0625 23:49:29.865912 788 log.go:172] (0xc0000ea370) Data frame received for 5\nI0625 23:49:29.865946 788 log.go:172] 
(0xc000606a00) (5) Data frame handling\nI0625 23:49:29.865972 788 log.go:172] (0xc000606a00) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0625 23:49:29.866462 788 log.go:172] (0xc0000ea370) Data frame received for 3\nI0625 23:49:29.866484 788 log.go:172] (0xc00069eaa0) (3) Data frame handling\nI0625 23:49:29.866496 788 log.go:172] (0xc0000ea370) Data frame received for 5\nI0625 23:49:29.866516 788 log.go:172] (0xc000606a00) (5) Data frame handling\nI0625 23:49:29.867812 788 log.go:172] (0xc0000ea370) Data frame received for 1\nI0625 23:49:29.867830 788 log.go:172] (0xc00069e1e0) (1) Data frame handling\nI0625 23:49:29.867853 788 log.go:172] (0xc00069e1e0) (1) Data frame sent\nI0625 23:49:29.867875 788 log.go:172] (0xc0000ea370) (0xc00069e1e0) Stream removed, broadcasting: 1\nI0625 23:49:29.867901 788 log.go:172] (0xc0000ea370) Go away received\nI0625 23:49:29.868223 788 log.go:172] (0xc0000ea370) (0xc00069e1e0) Stream removed, broadcasting: 1\nI0625 23:49:29.868236 788 log.go:172] (0xc0000ea370) (0xc00069eaa0) Stream removed, broadcasting: 3\nI0625 23:49:29.868256 788 log.go:172] (0xc0000ea370) (0xc000606a00) Stream removed, broadcasting: 5\n" Jun 25 23:49:29.874: INFO: stdout: "" Jun 25 23:49:29.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6206 execpod-affinitysts2n -- /bin/sh -x -c nc -zv -t -w 2 10.109.184.248 80' Jun 25 23:49:30.108: INFO: stderr: "I0625 23:49:30.019746 809 log.go:172] (0xc0009f40b0) (0xc00039a000) Create stream\nI0625 23:49:30.019837 809 log.go:172] (0xc0009f40b0) (0xc00039a000) Stream added, broadcasting: 1\nI0625 23:49:30.022895 809 log.go:172] (0xc0009f40b0) Reply frame received for 1\nI0625 23:49:30.022947 809 log.go:172] (0xc0009f40b0) (0xc0006cf9a0) Create stream\nI0625 23:49:30.022975 809 log.go:172] (0xc0009f40b0) (0xc0006cf9a0) Stream added, broadcasting: 3\nI0625 23:49:30.023920 809 log.go:172] (0xc0009f40b0) Reply frame received for 3\nI0625 23:49:30.023966 809 log.go:172] (0xc0009f40b0) (0xc00014f7c0) Create stream\nI0625 23:49:30.023985 809 log.go:172] (0xc0009f40b0) (0xc00014f7c0) Stream added, broadcasting: 5\nI0625 23:49:30.024818 809 log.go:172] (0xc0009f40b0) Reply frame received for 5\nI0625 23:49:30.101791 809 log.go:172] (0xc0009f40b0) Data frame received for 5\nI0625 23:49:30.101820 809 log.go:172] (0xc00014f7c0) (5) Data frame handling\nI0625 23:49:30.101828 809 log.go:172] (0xc00014f7c0) (5) Data frame sent\nI0625 23:49:30.101833 809 log.go:172] (0xc0009f40b0) Data frame received for 5\nI0625 23:49:30.101837 809 log.go:172] (0xc00014f7c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.184.248 80\nConnection to 10.109.184.248 80 port [tcp/http] succeeded!\nI0625 23:49:30.101853 809 log.go:172] (0xc0009f40b0) Data frame received for 3\nI0625 23:49:30.101858 809 log.go:172] (0xc0006cf9a0) (3) Data frame handling\nI0625 23:49:30.103066 809 log.go:172] (0xc0009f40b0) Data frame received for 1\nI0625 23:49:30.103097 809 log.go:172] (0xc00039a000) (1) Data frame handling\nI0625 23:49:30.103112 809 log.go:172] (0xc00039a000) (1) Data frame sent\nI0625 23:49:30.103127 809 log.go:172] (0xc0009f40b0) (0xc00039a000) Stream removed, broadcasting: 1\nI0625 23:49:30.103147 809 log.go:172] (0xc0009f40b0) Go away received\nI0625 23:49:30.103385 809 log.go:172] (0xc0009f40b0) (0xc00039a000) Stream removed, broadcasting: 1\nI0625 23:49:30.103402 809 log.go:172] (0xc0009f40b0) (0xc0006cf9a0) Stream removed, broadcasting: 3\nI0625 
23:49:30.103414 809 log.go:172] (0xc0009f40b0) (0xc00014f7c0) Stream removed, broadcasting: 5\n" Jun 25 23:49:30.108: INFO: stdout: "" Jun 25 23:49:30.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6206 execpod-affinitysts2n -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31097' Jun 25 23:49:30.293: INFO: stderr: "I0625 23:49:30.229695 830 log.go:172] (0xc000c0d3f0) (0xc00062c140) Create stream\nI0625 23:49:30.229754 830 log.go:172] (0xc000c0d3f0) (0xc00062c140) Stream added, broadcasting: 1\nI0625 23:49:30.232853 830 log.go:172] (0xc000c0d3f0) Reply frame received for 1\nI0625 23:49:30.232882 830 log.go:172] (0xc000c0d3f0) (0xc0004f48c0) Create stream\nI0625 23:49:30.232892 830 log.go:172] (0xc000c0d3f0) (0xc0004f48c0) Stream added, broadcasting: 3\nI0625 23:49:30.234101 830 log.go:172] (0xc000c0d3f0) Reply frame received for 3\nI0625 23:49:30.234141 830 log.go:172] (0xc000c0d3f0) (0xc0001372c0) Create stream\nI0625 23:49:30.234156 830 log.go:172] (0xc000c0d3f0) (0xc0001372c0) Stream added, broadcasting: 5\nI0625 23:49:30.235130 830 log.go:172] (0xc000c0d3f0) Reply frame received for 5\nI0625 23:49:30.285919 830 log.go:172] (0xc000c0d3f0) Data frame received for 3\nI0625 23:49:30.285963 830 log.go:172] (0xc0004f48c0) (3) Data frame handling\nI0625 23:49:30.286006 830 log.go:172] (0xc000c0d3f0) Data frame received for 5\nI0625 23:49:30.286056 830 log.go:172] (0xc0001372c0) (5) Data frame handling\nI0625 23:49:30.286100 830 log.go:172] (0xc0001372c0) (5) Data frame sent\nI0625 23:49:30.286129 830 log.go:172] (0xc000c0d3f0) Data frame received for 5\nI0625 23:49:30.286149 830 log.go:172] (0xc0001372c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31097\nConnection to 172.17.0.13 31097 port [tcp/31097] succeeded!\nI0625 23:49:30.287585 830 log.go:172] (0xc000c0d3f0) Data frame received for 1\nI0625 23:49:30.287623 830 log.go:172] (0xc00062c140) (1) Data frame handling\nI0625 23:49:30.287645 830 log.go:172] (0xc00062c140) (1) Data frame sent\nI0625 23:49:30.287667 830 log.go:172] (0xc000c0d3f0) (0xc00062c140) Stream removed, broadcasting: 1\nI0625 23:49:30.287703 830 log.go:172] (0xc000c0d3f0) Go away received\nI0625 23:49:30.287992 830 log.go:172] (0xc000c0d3f0) (0xc00062c140) Stream removed, broadcasting: 1\nI0625 23:49:30.288007 830 log.go:172] (0xc000c0d3f0) (0xc0004f48c0) Stream removed, broadcasting: 3\nI0625 23:49:30.288013 830 log.go:172] (0xc000c0d3f0) (0xc0001372c0) Stream removed, broadcasting: 5\n" Jun 25 23:49:30.293: INFO: stdout: "" Jun 25 23:49:30.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6206 execpod-affinitysts2n -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31097' Jun 25 23:49:30.497: INFO: stderr: "I0625 23:49:30.423910 850 log.go:172] (0xc000b0f760) (0xc000b36640) Create stream\nI0625 23:49:30.423966 850 log.go:172] (0xc000b0f760) (0xc000b36640) Stream added, broadcasting: 1\nI0625 23:49:30.429730 850 log.go:172] (0xc000b0f760) Reply frame received for 1\nI0625 23:49:30.429766 850 log.go:172] (0xc000b0f760) (0xc000836be0) Create stream\nI0625 23:49:30.429783 850 log.go:172] (0xc000b0f760) (0xc000836be0) Stream added, broadcasting: 3\nI0625 23:49:30.430727 850 log.go:172] (0xc000b0f760) Reply frame received for 3\nI0625 23:49:30.430773 850 log.go:172] (0xc000b0f760) (0xc0002c2640) Create stream\nI0625 23:49:30.430786 850 log.go:172] (0xc000b0f760) (0xc0002c2640) Stream added, broadcasting: 
5\nI0625 23:49:30.431621 850 log.go:172] (0xc000b0f760) Reply frame received for 5\nI0625 23:49:30.488777 850 log.go:172] (0xc000b0f760) Data frame received for 3\nI0625 23:49:30.488819 850 log.go:172] (0xc000836be0) (3) Data frame handling\nI0625 23:49:30.488844 850 log.go:172] (0xc000b0f760) Data frame received for 5\nI0625 23:49:30.488855 850 log.go:172] (0xc0002c2640) (5) Data frame handling\nI0625 23:49:30.488872 850 log.go:172] (0xc0002c2640) (5) Data frame sent\nI0625 23:49:30.488884 850 log.go:172] (0xc000b0f760) Data frame received for 5\nI0625 23:49:30.488894 850 log.go:172] (0xc0002c2640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31097\nConnection to 172.17.0.12 31097 port [tcp/31097] succeeded!\nI0625 23:49:30.491303 850 log.go:172] (0xc000b0f760) Data frame received for 1\nI0625 23:49:30.491346 850 log.go:172] (0xc000b36640) (1) Data frame handling\nI0625 23:49:30.491383 850 log.go:172] (0xc000b36640) (1) Data frame sent\nI0625 23:49:30.491487 850 log.go:172] (0xc000b0f760) (0xc000b36640) Stream removed, broadcasting: 1\nI0625 23:49:30.491557 850 log.go:172] (0xc000b0f760) Go away received\nI0625 23:49:30.491888 850 log.go:172] (0xc000b0f760) (0xc000b36640) Stream removed, broadcasting: 1\nI0625 23:49:30.491925 850 log.go:172] (0xc000b0f760) (0xc000836be0) Stream removed, broadcasting: 3\nI0625 23:49:30.491940 850 log.go:172] (0xc000b0f760) (0xc0002c2640) Stream removed, broadcasting: 5\n" Jun 25 23:49:30.497: INFO: stdout: "" Jun 25 23:49:30.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6206 execpod-affinitysts2n -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31097/ ; done' Jun 25 23:49:30.916: INFO: stderr: "I0625 23:49:30.679228 872 log.go:172] (0xc000c27130) (0xc000baa460) Create stream\nI0625 23:49:30.679274 872 log.go:172] (0xc000c27130) (0xc000baa460) Stream added, broadcasting: 1\nI0625 23:49:30.685643 872 log.go:172] (0xc000c27130) Reply frame received for 1\nI0625 23:49:30.685671 872 log.go:172] (0xc000c27130) (0xc00064ebe0) Create stream\nI0625 23:49:30.685678 872 log.go:172] (0xc000c27130) (0xc00064ebe0) Stream added, broadcasting: 3\nI0625 23:49:30.686503 872 log.go:172] (0xc000c27130) Reply frame received for 3\nI0625 23:49:30.686533 872 log.go:172] (0xc000c27130) (0xc0003a6000) Create stream\nI0625 23:49:30.686542 872 log.go:172] (0xc000c27130) (0xc0003a6000) Stream added, broadcasting: 5\nI0625 23:49:30.687589 872 log.go:172] (0xc000c27130) Reply frame received for 5\nI0625 23:49:30.764363 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.764403 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.764438 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ seq 0 15\nI0625 23:49:30.775021 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.775058 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.775073 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.775103 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.775115 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.775132 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.824250 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.824283 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.824492 872 
log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.824989 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.825007 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.825029 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.825301 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.825572 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.825592 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.832825 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.832852 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.832871 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.834481 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.834512 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.834528 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.834540 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.834546 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.834560 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.839575 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.839601 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.839621 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.840082 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.840110 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.840117 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.840127 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.840132 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.840137 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.847228 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.847254 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.847273 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.847676 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.847701 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.847712 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.847730 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.847737 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.847744 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.851450 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.851477 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.851495 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.853058 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.853077 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.853088 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.853101 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.853242 872 log.go:172] (0xc00064ebe0) (3) Data frame 
handling\nI0625 23:49:30.853259 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.858909 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.858927 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.858934 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.859646 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.859663 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.859676 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.859805 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.859818 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.859828 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.866268 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.866289 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.866317 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.866920 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.866940 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.866947 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.866957 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.866963 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.866968 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.870041 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.870067 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.870085 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.870634 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.870651 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.870659 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.870670 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.870676 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.870685 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.875442 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.875486 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.875504 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.875517 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.875530 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.875560 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.875580 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.875588 872 log.go:172] (0xc0003a6000) (5) Data frame sent\nI0625 23:49:30.875595 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.875602 872 log.go:172] (0xc0003a6000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.875616 872 log.go:172] (0xc0003a6000) (5) Data frame sent\nI0625 23:49:30.875627 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.880538 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.880579 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.880591 
872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.880607 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.880615 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.880624 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.880633 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.880640 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.880656 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.885820 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.885843 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.885855 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.886140 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.886270 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.886290 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.886319 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.886340 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.886362 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.890330 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.890363 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.890397 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.891119 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.891145 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.891153 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.891166 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.891173 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.891179 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.896261 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.896296 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.896350 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.896655 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.896684 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.896702 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.896737 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.896748 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.896762 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.901351 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.901387 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.901422 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.901834 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.901853 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.901862 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.901873 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.901879 872 log.go:172] (0xc00064ebe0) (3) Data 
frame handling\nI0625 23:49:30.901886 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.905482 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.905516 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.905581 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.906500 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.906523 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.906536 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.906570 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.906585 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.906599 872 log.go:172] (0xc0003a6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31097/\nI0625 23:49:30.909604 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.909631 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.909653 872 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0625 23:49:30.910163 872 log.go:172] (0xc000c27130) Data frame received for 3\nI0625 23:49:30.910187 872 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0625 23:49:30.910218 872 log.go:172] (0xc000c27130) Data frame received for 5\nI0625 23:49:30.910228 872 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0625 23:49:30.911693 872 log.go:172] (0xc000c27130) Data frame received for 1\nI0625 23:49:30.911708 872 log.go:172] (0xc000baa460) (1) Data frame handling\nI0625 23:49:30.911723 872 log.go:172] (0xc000baa460) (1) Data frame sent\nI0625 23:49:30.911742 872 log.go:172] (0xc000c27130) (0xc000baa460) Stream removed, broadcasting: 1\nI0625 23:49:30.911777 872 log.go:172] (0xc000c27130) Go away received\nI0625 23:49:30.912040 872 log.go:172] (0xc000c27130) (0xc000baa460) Stream removed, broadcasting: 1\nI0625 23:49:30.912072 872 log.go:172] (0xc000c27130) (0xc00064ebe0) Stream removed, broadcasting: 3\nI0625 23:49:30.912087 872 log.go:172] (0xc000c27130) (0xc0003a6000) Stream removed, broadcasting: 5\n" Jun 25 23:49:30.917: INFO: stdout: "\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5\naffinity-nodeport-8glg5" Jun 25 23:49:30.917: INFO: Received response from host: Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: 
affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Received response from host: affinity-nodeport-8glg5 Jun 25 23:49:30.917: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-6206, will wait for the garbage collector to delete the pods Jun 25 23:49:31.312: INFO: Deleting ReplicationController affinity-nodeport took: 30.333638ms Jun 25 23:49:31.813: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.465667ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:49:45.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6206" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:27.094 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":51,"skipped":741,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:49:45.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:49:45.576: INFO: Creating deployment "test-recreate-deployment" Jun 25 23:49:45.597: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 25 23:49:45.672: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 25 23:49:47.679: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 25 23:49:47.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725785, loc:(*time.Location)(0x80643c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725785, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725785, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728725785, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 25 23:49:49.687: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 25 23:49:49.695: INFO: Updating deployment test-recreate-deployment Jun 25 23:49:49.695: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 25 23:49:50.276: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1677 /apis/apps/v1/namespaces/deployment-1677/deployments/test-recreate-deployment c2d97a96-bf2b-4959-9fec-8482aa6cbb8b 15905359 2 2020-06-25 23:49:45 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-25 23:49:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-25 23:49:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ec27f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-25 23:49:49 +0000 UTC,LastTransitionTime:2020-06-25 23:49:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-06-25 23:49:50 +0000 UTC,LastTransitionTime:2020-06-25 23:49:45 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jun 25 23:49:50.279: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-1677 /apis/apps/v1/namespaces/deployment-1677/replicasets/test-recreate-deployment-d5667d9c7 48a5c9c1-26a6-48a3-a338-ae1e9e37f369 15905355 1 2020-06-25 23:49:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment c2d97a96-bf2b-4959-9fec-8482aa6cbb8b 0xc004ec3250 0xc004ec3251}] [] [{kube-controller-manager Update apps/v1 2020-06-25 23:49:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2d97a96-bf2b-4959-9fec-8482aa6cbb8b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ec32c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 25 23:49:50.279: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 25 23:49:50.279: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-1677 /apis/apps/v1/namespaces/deployment-1677/replicasets/test-recreate-deployment-6d65b9f6d8 5128c516-19e0-445e-b13a-13185172bb93 15905346 2 2020-06-25 23:49:45 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment c2d97a96-bf2b-4959-9fec-8482aa6cbb8b 0xc004ec3147 0xc004ec3148}] [] [{kube-controller-manager Update apps/v1 2020-06-25 23:49:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2d97a96-bf2b-4959-9fec-8482aa6cbb8b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ec31e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 25 23:49:50.336: INFO: Pod "test-recreate-deployment-d5667d9c7-v2jtc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-v2jtc test-recreate-deployment-d5667d9c7- deployment-1677 /api/v1/namespaces/deployment-1677/pods/test-recreate-deployment-d5667d9c7-v2jtc d540ebae-c8ac-450b-b082-f80e9be95e56 15905360 0 2020-06-25 23:49:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 
48a5c9c1-26a6-48a3-a338-ae1e9e37f369 0xc004ec37b0 0xc004ec37b1}] [] [{kube-controller-manager Update v1 2020-06-25 23:49:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48a5c9c1-26a6-48a3-a338-ae1e9e37f369\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-25 23:49:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-brtqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-brtqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-brtqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-25 23:49:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-25 23:49:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-25 23:49:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-25 23:49:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-25 23:49:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:49:50.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1677" for this suite. 
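
The Recreate test above passes because the deployment's strategy scales the old ReplicaSet (the revision-1 agnhost one) to zero before the new httpd ReplicaSet is scaled up, so at no point do old and new pods run together. A minimal Go sketch of an equivalent spec, assuming the k8s.io/api types the dumps above come from; the package and function names here are illustrative, not from the test source:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// recreateDeployment builds a Deployment like "test-recreate-deployment":
// one replica, Recreate strategy, httpd pod template labeled name=sample-pod-3.
func recreateDeployment() *appsv1.Deployment {
	labels := map[string]string{"name": "sample-pod-3"}
	replicas := int32(1)
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate tears the old ReplicaSet down to zero before the new
			// one is scaled up, unlike the default RollingUpdate.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
}

With RollingUpdate the two ReplicaSets would briefly coexist; Recreate is what lets the test assert that no new pod runs alongside an old one.
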
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":294,"completed":52,"skipped":748,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:49:50.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-d3c24756-b3d8-45cc-96e1-d842ab9aff85 STEP: Creating a pod to test consume secrets Jun 25 23:49:50.490: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5b3d5bd1-ccac-4fcc-bda1-935b68e912d6" in namespace "projected-9530" to be "Succeeded or Failed" Jun 25 23:49:50.534: INFO: Pod "pod-projected-secrets-5b3d5bd1-ccac-4fcc-bda1-935b68e912d6": Phase="Pending", Reason="", readiness=false. Elapsed: 44.336784ms Jun 25 23:49:52.551: INFO: Pod "pod-projected-secrets-5b3d5bd1-ccac-4fcc-bda1-935b68e912d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061496847s Jun 25 23:49:54.575: INFO: Pod "pod-projected-secrets-5b3d5bd1-ccac-4fcc-bda1-935b68e912d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085508767s Jun 25 23:49:56.580: INFO: Pod "pod-projected-secrets-5b3d5bd1-ccac-4fcc-bda1-935b68e912d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.089778267s STEP: Saw pod success Jun 25 23:49:56.580: INFO: Pod "pod-projected-secrets-5b3d5bd1-ccac-4fcc-bda1-935b68e912d6" satisfied condition "Succeeded or Failed" Jun 25 23:49:56.582: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-5b3d5bd1-ccac-4fcc-bda1-935b68e912d6 container projected-secret-volume-test: STEP: delete the pod Jun 25 23:49:56.614: INFO: Waiting for pod pod-projected-secrets-5b3d5bd1-ccac-4fcc-bda1-935b68e912d6 to disappear Jun 25 23:49:56.627: INFO: Pod pod-projected-secrets-5b3d5bd1-ccac-4fcc-bda1-935b68e912d6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:49:56.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9530" for this suite. 
• [SLOW TEST:6.234 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":53,"skipped":758,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:49:56.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9915 STEP: creating service affinity-clusterip in namespace services-9915 STEP: creating replication controller affinity-clusterip in namespace services-9915 I0625 23:49:56.751345 8 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-9915, replica count: 3 I0625 23:49:59.801769 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0625 23:50:02.802043 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 25 23:50:02.808: INFO: Creating new exec pod Jun 25 23:50:07.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9915 execpod-affinityz5ht7 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jun 25 23:50:08.055: INFO: stderr: "I0625 23:50:07.976477 891 log.go:172] (0xc000024bb0) (0xc0009acc80) Create stream\nI0625 23:50:07.976526 891 log.go:172] (0xc000024bb0) (0xc0009acc80) Stream added, broadcasting: 1\nI0625 23:50:07.978625 891 log.go:172] (0xc000024bb0) Reply frame received for 1\nI0625 23:50:07.978679 891 log.go:172] (0xc000024bb0) (0xc0009ad720) Create stream\nI0625 23:50:07.978695 891 log.go:172] (0xc000024bb0) (0xc0009ad720) Stream added, broadcasting: 3\nI0625 23:50:07.979853 891 log.go:172] (0xc000024bb0) Reply frame received for 3\nI0625 23:50:07.979887 891 log.go:172] (0xc000024bb0) (0xc0009a2c80) Create stream\nI0625 23:50:07.979897 891 log.go:172] (0xc000024bb0) (0xc0009a2c80) Stream added, broadcasting: 5\nI0625 
23:50:07.981063 891 log.go:172] (0xc000024bb0) Reply frame received for 5\nI0625 23:50:08.045930 891 log.go:172] (0xc000024bb0) Data frame received for 5\nI0625 23:50:08.045962 891 log.go:172] (0xc0009a2c80) (5) Data frame handling\nI0625 23:50:08.045981 891 log.go:172] (0xc0009a2c80) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0625 23:50:08.046396 891 log.go:172] (0xc000024bb0) Data frame received for 5\nI0625 23:50:08.046414 891 log.go:172] (0xc0009a2c80) (5) Data frame handling\nI0625 23:50:08.046424 891 log.go:172] (0xc0009a2c80) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0625 23:50:08.046495 891 log.go:172] (0xc000024bb0) Data frame received for 5\nI0625 23:50:08.046504 891 log.go:172] (0xc0009a2c80) (5) Data frame handling\nI0625 23:50:08.046849 891 log.go:172] (0xc000024bb0) Data frame received for 3\nI0625 23:50:08.046876 891 log.go:172] (0xc0009ad720) (3) Data frame handling\nI0625 23:50:08.048582 891 log.go:172] (0xc000024bb0) Data frame received for 1\nI0625 23:50:08.048622 891 log.go:172] (0xc0009acc80) (1) Data frame handling\nI0625 23:50:08.048645 891 log.go:172] (0xc0009acc80) (1) Data frame sent\nI0625 23:50:08.048666 891 log.go:172] (0xc000024bb0) (0xc0009acc80) Stream removed, broadcasting: 1\nI0625 23:50:08.048698 891 log.go:172] (0xc000024bb0) Go away received\nI0625 23:50:08.049414 891 log.go:172] (0xc000024bb0) (0xc0009acc80) Stream removed, broadcasting: 1\nI0625 23:50:08.049439 891 log.go:172] (0xc000024bb0) (0xc0009ad720) Stream removed, broadcasting: 3\nI0625 23:50:08.049449 891 log.go:172] (0xc000024bb0) (0xc0009a2c80) Stream removed, broadcasting: 5\n" Jun 25 23:50:08.055: INFO: stdout: "" Jun 25 23:50:08.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9915 execpod-affinityz5ht7 -- /bin/sh -x -c nc -zv -t -w 2 10.103.148.9 80' Jun 25 23:50:08.268: INFO: stderr: "I0625 23:50:08.197777 912 log.go:172] (0xc000910790) (0xc0005f65a0) Create stream\nI0625 23:50:08.197869 912 log.go:172] (0xc000910790) (0xc0005f65a0) Stream added, broadcasting: 1\nI0625 23:50:08.201060 912 log.go:172] (0xc000910790) Reply frame received for 1\nI0625 23:50:08.201337 912 log.go:172] (0xc000910790) (0xc000534320) Create stream\nI0625 23:50:08.201374 912 log.go:172] (0xc000910790) (0xc000534320) Stream added, broadcasting: 3\nI0625 23:50:08.202674 912 log.go:172] (0xc000910790) Reply frame received for 3\nI0625 23:50:08.202744 912 log.go:172] (0xc000910790) (0xc0005f6aa0) Create stream\nI0625 23:50:08.202793 912 log.go:172] (0xc000910790) (0xc0005f6aa0) Stream added, broadcasting: 5\nI0625 23:50:08.203949 912 log.go:172] (0xc000910790) Reply frame received for 5\nI0625 23:50:08.260339 912 log.go:172] (0xc000910790) Data frame received for 5\nI0625 23:50:08.260469 912 log.go:172] (0xc0005f6aa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.148.9 80\nConnection to 10.103.148.9 80 port [tcp/http] succeeded!\nI0625 23:50:08.260499 912 log.go:172] (0xc000910790) Data frame received for 3\nI0625 23:50:08.260534 912 log.go:172] (0xc000534320) (3) Data frame handling\nI0625 23:50:08.260554 912 log.go:172] (0xc0005f6aa0) (5) Data frame sent\nI0625 23:50:08.260573 912 log.go:172] (0xc000910790) Data frame received for 5\nI0625 23:50:08.260584 912 log.go:172] (0xc0005f6aa0) (5) Data frame handling\nI0625 23:50:08.261858 912 log.go:172] (0xc000910790) Data frame received for 1\nI0625 23:50:08.261894 912 log.go:172] (0xc0005f65a0) (1) Data frame 
handling\nI0625 23:50:08.261913 912 log.go:172] (0xc0005f65a0) (1) Data frame sent\nI0625 23:50:08.261927 912 log.go:172] (0xc000910790) (0xc0005f65a0) Stream removed, broadcasting: 1\nI0625 23:50:08.261943 912 log.go:172] (0xc000910790) Go away received\nI0625 23:50:08.262501 912 log.go:172] (0xc000910790) (0xc0005f65a0) Stream removed, broadcasting: 1\nI0625 23:50:08.262538 912 log.go:172] (0xc000910790) (0xc000534320) Stream removed, broadcasting: 3\nI0625 23:50:08.262560 912 log.go:172] (0xc000910790) (0xc0005f6aa0) Stream removed, broadcasting: 5\n" Jun 25 23:50:08.268: INFO: stdout: "" Jun 25 23:50:08.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9915 execpod-affinityz5ht7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.103.148.9:80/ ; done' Jun 25 23:50:08.569: INFO: stderr: "I0625 23:50:08.410927 933 log.go:172] (0xc0009ef080) (0xc0006285a0) Create stream\nI0625 23:50:08.411017 933 log.go:172] (0xc0009ef080) (0xc0006285a0) Stream added, broadcasting: 1\nI0625 23:50:08.413693 933 log.go:172] (0xc0009ef080) Reply frame received for 1\nI0625 23:50:08.413731 933 log.go:172] (0xc0009ef080) (0xc000aa0640) Create stream\nI0625 23:50:08.413744 933 log.go:172] (0xc0009ef080) (0xc000aa0640) Stream added, broadcasting: 3\nI0625 23:50:08.414695 933 log.go:172] (0xc0009ef080) Reply frame received for 3\nI0625 23:50:08.414713 933 log.go:172] (0xc0009ef080) (0xc000628c80) Create stream\nI0625 23:50:08.414719 933 log.go:172] (0xc0009ef080) (0xc000628c80) Stream added, broadcasting: 5\nI0625 23:50:08.415766 933 log.go:172] (0xc0009ef080) Reply frame received for 5\nI0625 23:50:08.496522 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.496547 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.496554 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.496573 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.496578 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.496583 933 log.go:172] (0xc000628c80) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.499401 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.499429 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.499447 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.499699 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.499716 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.499725 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.499739 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.499744 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.499749 933 log.go:172] (0xc000628c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.502730 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.502750 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.502766 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.502921 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.502933 933 log.go:172] (0xc000628c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.502944 933 log.go:172] (0xc0009ef080) Data frame received for 
3\nI0625 23:50:08.502958 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.502964 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.502975 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.505877 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.505893 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.505909 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.506235 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.506247 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.506261 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.506268 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.506273 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.506278 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.506287 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.506292 933 log.go:172] (0xc000628c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.506304 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.509251 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.509272 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.509285 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.509808 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.509821 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.509830 933 log.go:172] (0xc000628c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.509846 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.509866 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.509875 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.512333 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.512347 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.512363 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.512675 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.512688 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.512704 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.512717 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.512724 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.512730 933 log.go:172] (0xc000628c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.515823 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.515841 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.515853 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.516068 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.516083 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.516093 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.516102 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.516108 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.516116 933 log.go:172] (0xc000628c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.519388 933 log.go:172] 
(0xc0009ef080) Data frame received for 3\nI0625 23:50:08.519420 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.519441 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.519600 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.519626 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.519635 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.519648 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.519654 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.519662 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.519674 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.519681 933 log.go:172] (0xc000628c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.519698 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.525750 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.525766 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.525778 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.526344 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.526366 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.526378 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.526386 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.526394 933 log.go:172] (0xc000628c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.526413 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.526463 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.526492 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.526534 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.530432 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.530456 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.530477 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.530870 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.530895 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.530928 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.530951 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.530972 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.530991 933 log.go:172] (0xc000628c80) (5) Data frame handling\n+ echo\nI0625 23:50:08.531021 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.531041 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.531062 933 log.go:172] (0xc000aa0640) (3) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.534855 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.534885 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.534928 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.535217 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.535242 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.535253 933 log.go:172] (0xc000628c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.535286 933 log.go:172] (0xc0009ef080) Data frame received for 
3\nI0625 23:50:08.535312 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.535330 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.539427 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.539444 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.539455 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.539811 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.539841 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.539854 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.539872 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.539881 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.539891 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.539902 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.539918 933 log.go:172] (0xc000628c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.539943 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.543947 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.543963 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.543976 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.544506 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.544523 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.544540 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.544561 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.544574 933 log.go:172] (0xc000628c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.544597 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.548461 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.548480 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.548491 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.548820 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.548840 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.548853 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.548885 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.548904 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.548924 933 log.go:172] (0xc000628c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.552816 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.552841 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.552862 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.553429 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.553467 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.553483 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.553505 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.553515 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.553532 933 log.go:172] (0xc000628c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.557616 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.557636 933 log.go:172] 
(0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.557656 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.557954 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.557976 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.558000 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.558012 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.558028 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.558048 933 log.go:172] (0xc000628c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.148.9:80/\nI0625 23:50:08.558072 933 log.go:172] (0xc000628c80) (5) Data frame sent\nI0625 23:50:08.558086 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.558101 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.561897 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.561951 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.561986 933 log.go:172] (0xc000aa0640) (3) Data frame sent\nI0625 23:50:08.562429 933 log.go:172] (0xc0009ef080) Data frame received for 5\nI0625 23:50:08.562449 933 log.go:172] (0xc000628c80) (5) Data frame handling\nI0625 23:50:08.562472 933 log.go:172] (0xc0009ef080) Data frame received for 3\nI0625 23:50:08.562502 933 log.go:172] (0xc000aa0640) (3) Data frame handling\nI0625 23:50:08.564215 933 log.go:172] (0xc0009ef080) Data frame received for 1\nI0625 23:50:08.564250 933 log.go:172] (0xc0006285a0) (1) Data frame handling\nI0625 23:50:08.564274 933 log.go:172] (0xc0006285a0) (1) Data frame sent\nI0625 23:50:08.564288 933 log.go:172] (0xc0009ef080) (0xc0006285a0) Stream removed, broadcasting: 1\nI0625 23:50:08.564303 933 log.go:172] (0xc0009ef080) Go away received\nI0625 23:50:08.564865 933 log.go:172] (0xc0009ef080) (0xc0006285a0) Stream removed, broadcasting: 1\nI0625 23:50:08.564889 933 log.go:172] (0xc0009ef080) (0xc000aa0640) Stream removed, broadcasting: 3\nI0625 23:50:08.564907 933 log.go:172] (0xc0009ef080) (0xc000628c80) Stream removed, broadcasting: 5\n" Jun 25 23:50:08.570: INFO: stdout: "\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6\naffinity-clusterip-7phg6" Jun 25 23:50:08.570: INFO: Received response from host: Jun 25 23:50:08.570: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.570: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.570: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.570: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.570: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.570: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.571: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.571: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.571: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.571: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.571: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.571: 
INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.571: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.571: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.571: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.571: INFO: Received response from host: affinity-clusterip-7phg6 Jun 25 23:50:08.571: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-9915, will wait for the garbage collector to delete the pods Jun 25 23:50:08.679: INFO: Deleting ReplicationController affinity-clusterip took: 5.78283ms Jun 25 23:50:09.079: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.257429ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:50:25.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9915" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:28.394 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":54,"skipped":780,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:50:25.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-50b03187-3395-4f6b-91c4-434a6cb1e834 STEP: Creating a pod to test consume configMaps Jun 25 23:50:25.119: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a93065a6-0fa7-4b79-bed3-911614aad797" in namespace "projected-2505" to be "Succeeded or Failed" Jun 25 23:50:25.160: INFO: Pod "pod-projected-configmaps-a93065a6-0fa7-4b79-bed3-911614aad797": Phase="Pending", Reason="", readiness=false. Elapsed: 40.512257ms Jun 25 23:50:27.165: INFO: Pod "pod-projected-configmaps-a93065a6-0fa7-4b79-bed3-911614aad797": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045821174s Jun 25 23:50:29.169: INFO: Pod "pod-projected-configmaps-a93065a6-0fa7-4b79-bed3-911614aad797": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049503989s STEP: Saw pod success Jun 25 23:50:29.169: INFO: Pod "pod-projected-configmaps-a93065a6-0fa7-4b79-bed3-911614aad797" satisfied condition "Succeeded or Failed" Jun 25 23:50:29.171: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-a93065a6-0fa7-4b79-bed3-911614aad797 container projected-configmap-volume-test: STEP: delete the pod Jun 25 23:50:29.204: INFO: Waiting for pod pod-projected-configmaps-a93065a6-0fa7-4b79-bed3-911614aad797 to disappear Jun 25 23:50:29.215: INFO: Pod pod-projected-configmaps-a93065a6-0fa7-4b79-bed3-911614aad797 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:50:29.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2505" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":55,"skipped":785,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:50:29.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-093c1420-0ce1-45b7-ad9a-b00e6e7b8589 in namespace container-probe-2353 Jun 25 23:50:33.394: INFO: Started pod test-webserver-093c1420-0ce1-45b7-ad9a-b00e6e7b8589 in namespace container-probe-2353 STEP: checking the pod's current state and verifying that restartCount is present Jun 25 23:50:33.398: INFO: Initial restart count of pod test-webserver-093c1420-0ce1-45b7-ad9a-b00e6e7b8589 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:54:34.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2353" for this suite. 
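
The probe test above passes when the pod's restartCount stays at 0 over the observation window (roughly four minutes in this run, per the timestamps). A sketch of the kind of /healthz HTTP liveness probe involved, using the v1.19-era core/v1 types where the handler is the embedded Handler field (later client-go versions rename it ProbeHandler); the port and thresholds are assumptions:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthzLivenessProbe returns an HTTP liveness probe; as long as /healthz
// keeps answering 2xx/3xx, the kubelet never restarts the container.
func healthzLivenessProbe() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(8080), // port is an assumption
			},
		},
		InitialDelaySeconds: 15,
		PeriodSeconds:       10,
		FailureThreshold:    3, // restart only after 3 consecutive failures
	}
}
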
• [SLOW TEST:244.855 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":294,"completed":56,"skipped":786,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:54:34.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 25 23:54:34.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-522a7d0f-d5cb-4200-8e24-55b449868f00" in namespace "downward-api-3130" to be "Succeeded or Failed" Jun 25 23:54:34.466: INFO: Pod "downwardapi-volume-522a7d0f-d5cb-4200-8e24-55b449868f00": Phase="Pending", Reason="", readiness=false. Elapsed: 52.507385ms Jun 25 23:54:36.471: INFO: Pod "downwardapi-volume-522a7d0f-d5cb-4200-8e24-55b449868f00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057608799s Jun 25 23:54:38.476: INFO: Pod "downwardapi-volume-522a7d0f-d5cb-4200-8e24-55b449868f00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062069527s STEP: Saw pod success Jun 25 23:54:38.476: INFO: Pod "downwardapi-volume-522a7d0f-d5cb-4200-8e24-55b449868f00" satisfied condition "Succeeded or Failed" Jun 25 23:54:38.479: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-522a7d0f-d5cb-4200-8e24-55b449868f00 container client-container: STEP: delete the pod Jun 25 23:54:38.611: INFO: Waiting for pod downwardapi-volume-522a7d0f-d5cb-4200-8e24-55b449868f00 to disappear Jun 25 23:54:38.621: INFO: Pod downwardapi-volume-522a7d0f-d5cb-4200-8e24-55b449868f00 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:54:38.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3130" for this suite. 
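
The downward API test above exposes the container's requests.memory to the container itself as a file in a volume; resource values go through resourceFieldRef (plain metadata like the pod name uses fieldRef instead), and with the default divisor the kubelet writes the request in bytes. A sketch assuming core/v1 types; the volume name, file path, and container name are illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

// memoryRequestVolume exposes the named container's memory request as the
// file "memory_request" inside a downward API volume.
func memoryRequestVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.memory",
					},
				}},
			},
		},
	}
}
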
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":294,"completed":57,"skipped":799,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:54:38.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:54:49.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1355" for this suite. • [SLOW TEST:11.123 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":294,"completed":58,"skipped":820,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:54:49.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:54:54.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2787" for this suite. • [SLOW TEST:5.138 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":294,"completed":59,"skipped":845,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:54:54.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 25 23:54:55.001: INFO: Waiting up to 5m0s for pod "downwardapi-volume-460cf76c-57cd-4d02-8e4b-af78197182e4" in namespace 
"projected-9064" to be "Succeeded or Failed" Jun 25 23:54:55.005: INFO: Pod "downwardapi-volume-460cf76c-57cd-4d02-8e4b-af78197182e4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.793374ms Jun 25 23:54:57.010: INFO: Pod "downwardapi-volume-460cf76c-57cd-4d02-8e4b-af78197182e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00864548s Jun 25 23:54:59.014: INFO: Pod "downwardapi-volume-460cf76c-57cd-4d02-8e4b-af78197182e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013170302s STEP: Saw pod success Jun 25 23:54:59.014: INFO: Pod "downwardapi-volume-460cf76c-57cd-4d02-8e4b-af78197182e4" satisfied condition "Succeeded or Failed" Jun 25 23:54:59.017: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-460cf76c-57cd-4d02-8e4b-af78197182e4 container client-container: STEP: delete the pod Jun 25 23:54:59.050: INFO: Waiting for pod downwardapi-volume-460cf76c-57cd-4d02-8e4b-af78197182e4 to disappear Jun 25 23:54:59.059: INFO: Pod downwardapi-volume-460cf76c-57cd-4d02-8e4b-af78197182e4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:54:59.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9064" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":294,"completed":60,"skipped":855,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:54:59.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:54:59.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-177" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":294,"completed":61,"skipped":869,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:54:59.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:54:59.298: INFO: Create a RollingUpdate DaemonSet Jun 25 23:54:59.302: INFO: Check that daemon pods launch on every node of the cluster Jun 25 23:54:59.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 25 23:54:59.309: INFO: Number of nodes with available pods: 0 Jun 25 23:54:59.310: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:55:00.484: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 25 23:55:00.488: INFO: Number of nodes with available pods: 0 Jun 25 23:55:00.488: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:55:01.315: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 25 23:55:01.319: INFO: Number of nodes with available pods: 0 Jun 25 23:55:01.319: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:55:02.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 25 23:55:02.368: INFO: Number of nodes with available pods: 0 Jun 25 23:55:02.368: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:55:03.315: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 25 23:55:03.318: INFO: Number of nodes with available pods: 1 Jun 25 23:55:03.319: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:55:04.334: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 25 23:55:04.339: INFO: Number of nodes with available pods: 2 Jun 25 23:55:04.339: INFO: Number of running nodes: 2, number of available pods: 2 
Jun 25 23:55:04.339: INFO: Update the DaemonSet to trigger a rollout Jun 25 23:55:04.348: INFO: Updating DaemonSet daemon-set Jun 25 23:55:15.404: INFO: Roll back the DaemonSet before rollout is complete Jun 25 23:55:15.412: INFO: Updating DaemonSet daemon-set Jun 25 23:55:15.412: INFO: Make sure DaemonSet rollback is complete Jun 25 23:55:15.430: INFO: Wrong image for pod: daemon-set-4kksd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 25 23:55:15.430: INFO: Pod daemon-set-4kksd is not available Jun 25 23:55:15.442: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 25 23:55:16.447: INFO: Wrong image for pod: daemon-set-4kksd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 25 23:55:16.447: INFO: Pod daemon-set-4kksd is not available Jun 25 23:55:16.452: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 25 23:55:17.447: INFO: Wrong image for pod: daemon-set-4kksd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jun 25 23:55:17.447: INFO: Pod daemon-set-4kksd is not available Jun 25 23:55:17.451: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 25 23:55:18.448: INFO: Pod daemon-set-lzzw4 is not available Jun 25 23:55:18.453: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4676, will wait for the garbage collector to delete the pods Jun 25 23:55:18.532: INFO: Deleting DaemonSet.extensions daemon-set took: 20.679904ms Jun 25 23:55:18.933: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.563925ms Jun 25 23:55:25.336: INFO: Number of nodes with available pods: 0 Jun 25 23:55:25.336: INFO: Number of running nodes: 0, number of available pods: 0 Jun 25 23:55:25.341: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4676/daemonsets","resourceVersion":"15906751"},"items":null} Jun 25 23:55:25.344: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4676/pods","resourceVersion":"15906751"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:55:25.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4676" for this suite. 
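The update-then-rollback flow logged above can be reproduced with two plain updates against the DaemonSet. A minimal sketch, assuming a DaemonSet named daemon-set in the default namespace (the suite uses a generated namespace and its own helpers):

```go
// Minimal sketch of the test's rollout-and-rollback sequence. The
// DaemonSet name and namespace are assumptions for illustration.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ds := clientset.AppsV1().DaemonSets("default")

	d, err := ds.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	good := d.Spec.Template.Spec.Containers[0].Image

	// Trigger a rollout with an image that can never be pulled.
	d.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if d, err = ds.Update(context.TODO(), d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Roll back before the rollout completes. Pods still running the good
	// image should not be restarted; that is what the test asserts.
	d.Spec.Template.Spec.Containers[0].Image = good
	if _, err = ds.Update(context.TODO(), d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```

`kubectl rollout undo daemonset/daemon-set` performs the equivalent rollback using the recorded ControllerRevisions.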
• [SLOW TEST:26.194 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":294,"completed":62,"skipped":876,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:55:25.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:55:25.455: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jun 25 23:55:27.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6342 create -f -' Jun 25 23:55:33.085: INFO: stderr: "" Jun 25 23:55:33.085: INFO: stdout: "e2e-test-crd-publish-openapi-7331-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 25 23:55:33.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6342 delete e2e-test-crd-publish-openapi-7331-crds test-foo' Jun 25 23:55:33.216: INFO: stderr: "" Jun 25 23:55:33.216: INFO: stdout: "e2e-test-crd-publish-openapi-7331-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jun 25 23:55:33.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6342 apply -f -' Jun 25 23:55:36.375: INFO: stderr: "" Jun 25 23:55:36.375: INFO: stdout: "e2e-test-crd-publish-openapi-7331-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 25 23:55:36.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6342 delete e2e-test-crd-publish-openapi-7331-crds test-foo' Jun 25 23:55:36.559: INFO: stderr: "" Jun 25 23:55:36.559: INFO: stdout: "e2e-test-crd-publish-openapi-7331-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jun 25 23:55:36.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6342 create -f -' Jun 25 
23:55:39.826: INFO: rc: 1 Jun 25 23:55:39.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6342 apply -f -' Jun 25 23:55:40.219: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jun 25 23:55:40.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6342 create -f -' Jun 25 23:55:40.477: INFO: rc: 1 Jun 25 23:55:40.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6342 apply -f -' Jun 25 23:55:40.728: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jun 25 23:55:40.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7331-crds' Jun 25 23:55:41.029: INFO: stderr: "" Jun 25 23:55:41.029: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jun 25 23:55:41.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7331-crds.metadata' Jun 25 23:55:41.298: INFO: stderr: "" Jun 25 23:55:41.298: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. 
Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jun 25 23:55:41.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7331-crds.spec' Jun 25 23:55:41.570: INFO: stderr: "" Jun 25 23:55:41.570: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jun 25 23:55:41.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7331-crds.spec.bars' Jun 25 23:55:41.830: INFO: stderr: "" Jun 25 23:55:41.830: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jun 25 23:55:41.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7331-crds.spec.bars2' Jun 25 23:55:42.098: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:55:44.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6342" for this suite. 
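The `kubectl explain` output above is read from the OpenAPI document the apiserver publishes for the CRD's validation schema. Below is a sketch of a comparable CRD; the group, names, and field descriptions mirror the log, but this is an illustration, not the fixture the suite actually generates:

```go
// Sketch: a CRD whose validation schema matches the fields shown by
// `kubectl explain` above. All names here are illustrative.
package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func fooCRD() *apiextensionsv1.CustomResourceDefinition {
	bar := apiextensionsv1.JSONSchemaProps{
		Type:     "object",
		Required: []string{"name"},
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"name": {Type: "string", Description: "Name of Bar."},
			"age":  {Type: "string", Description: "Age of Bar."},
			"bazs": {Type: "array", Description: "List of Bazs.",
				Items: &apiextensionsv1.JSONSchemaPropsOrArray{
					Schema: &apiextensionsv1.JSONSchemaProps{Type: "string"},
				}},
		},
	}
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.crd-publish-openapi-test-foo.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test-foo.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:        "object",
						Description: "Foo CRD for Testing",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {Type: "object", Description: "Specification of Foo",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"bars": {Type: "array", Description: "List of Bars and their specs.",
										Items: &apiextensionsv1.JSONSchemaPropsOrArray{Schema: &bar}},
								}},
							"status": {Type: "object", Description: "Status of Foo"},
						},
					},
				},
			}},
		},
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclientset.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Once created, the apiserver publishes the schema in its OpenAPI
	// document; `kubectl explain foos.spec.bars` then works as in the log.
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().
		Create(context.TODO(), fooCRD(), metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

The same published schema drives the client-side rejections in the log (the `rc: 1` runs) when `kubectl create` or `apply` sees unknown or missing required properties.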
• [SLOW TEST:18.659 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":294,"completed":63,"skipped":891,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:55:44.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-fzt84 in namespace proxy-4279 I0625 23:55:44.154153 8 runners.go:190] Created replication controller with name: proxy-service-fzt84, namespace: proxy-4279, replica count: 1 I0625 23:55:45.204577 8 runners.go:190] proxy-service-fzt84 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0625 23:55:46.204822 8 runners.go:190] proxy-service-fzt84 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0625 23:55:47.205068 8 runners.go:190] proxy-service-fzt84 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0625 23:55:48.205618 8 runners.go:190] proxy-service-fzt84 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0625 23:55:49.205865 8 runners.go:190] proxy-service-fzt84 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0625 23:55:50.206097 8 runners.go:190] proxy-service-fzt84 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 25 23:55:50.255: INFO: setup took 6.150807821s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 25 23:55:50.262: INFO: (0) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 5.993901ms) Jun 25 23:55:50.262: INFO: (0) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 6.01683ms) Jun 25 23:55:50.266: INFO: (0) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 10.495269ms) Jun 25 23:55:50.267: INFO: (0) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 10.907501ms) Jun 25 23:55:50.267: 
INFO: (0) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 11.290991ms) Jun 25 23:55:50.267: INFO: (0) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 11.440115ms) Jun 25 23:55:50.267: INFO: (0) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 11.563659ms) Jun 25 23:55:50.268: INFO: (0) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 12.215323ms) Jun 25 23:55:50.268: INFO: (0) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 12.043902ms) Jun 25 23:55:50.268: INFO: (0) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 12.411329ms) Jun 25 23:55:50.268: INFO: (0) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... (200; 12.608204ms) Jun 25 23:55:50.306: INFO: (0) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 49.922992ms) Jun 25 23:55:50.306: INFO: (0) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test (200; 5.526914ms) Jun 25 23:55:50.312: INFO: (1) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test<... (200; 5.623724ms) Jun 25 23:55:50.312: INFO: (1) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 5.683118ms) Jun 25 23:55:50.313: INFO: (1) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 7.150629ms) Jun 25 23:55:50.317: INFO: (1) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 10.619604ms) Jun 25 23:55:50.321: INFO: (2) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 3.71502ms) Jun 25 23:55:50.321: INFO: (2) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 4.262501ms) Jun 25 23:55:50.321: INFO: (2) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test<... 
(200; 4.629308ms) Jun 25 23:55:50.323: INFO: (2) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 5.796025ms) Jun 25 23:55:50.323: INFO: (2) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 5.931404ms) Jun 25 23:55:50.323: INFO: (2) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 5.995421ms) Jun 25 23:55:50.323: INFO: (2) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 6.091705ms) Jun 25 23:55:50.323: INFO: (2) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 6.170651ms) Jun 25 23:55:50.323: INFO: (2) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 6.25029ms) Jun 25 23:55:50.324: INFO: (2) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 7.489599ms) Jun 25 23:55:50.324: INFO: (2) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname2/proxy/: tls qux (200; 7.435174ms) Jun 25 23:55:50.324: INFO: (2) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 7.473264ms) Jun 25 23:55:50.324: INFO: (2) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 7.445283ms) Jun 25 23:55:50.324: INFO: (2) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 7.537433ms) Jun 25 23:55:50.324: INFO: (2) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 7.539547ms) Jun 25 23:55:50.328: INFO: (3) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 3.893822ms) Jun 25 23:55:50.329: INFO: (3) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 3.721876ms) Jun 25 23:55:50.329: INFO: (3) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 3.975738ms) Jun 25 23:55:50.329: INFO: (3) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 4.657524ms) Jun 25 23:55:50.329: INFO: (3) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 4.682446ms) Jun 25 23:55:50.329: INFO: (3) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 4.66367ms) Jun 25 23:55:50.330: INFO: (3) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 5.059029ms) Jun 25 23:55:50.331: INFO: (3) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... 
(200; 6.305077ms) Jun 25 23:55:50.332: INFO: (3) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 6.7833ms) Jun 25 23:55:50.332: INFO: (3) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test (200; 7.776066ms) Jun 25 23:55:50.333: INFO: (3) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 7.814674ms) Jun 25 23:55:50.333: INFO: (3) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 8.246283ms) Jun 25 23:55:50.333: INFO: (3) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname2/proxy/: tls qux (200; 8.015287ms) Jun 25 23:55:50.333: INFO: (3) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 7.779156ms) Jun 25 23:55:50.333: INFO: (3) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 8.234686ms) Jun 25 23:55:50.342: INFO: (4) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 9.207255ms) Jun 25 23:55:50.342: INFO: (4) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 9.347812ms) Jun 25 23:55:50.342: INFO: (4) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 9.252436ms) Jun 25 23:55:50.342: INFO: (4) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 9.186336ms) Jun 25 23:55:50.342: INFO: (4) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 9.421773ms) Jun 25 23:55:50.346: INFO: (4) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 12.572354ms) Jun 25 23:55:50.346: INFO: (4) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 12.692246ms) Jun 25 23:55:50.346: INFO: (4) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 12.659892ms) Jun 25 23:55:50.346: INFO: (4) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... (200; 12.775661ms) Jun 25 23:55:50.346: INFO: (4) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: ... (200; 2.269822ms) Jun 25 23:55:50.351: INFO: (5) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 4.567574ms) Jun 25 23:55:50.351: INFO: (5) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 5.23939ms) Jun 25 23:55:50.351: INFO: (5) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 5.311879ms) Jun 25 23:55:50.352: INFO: (5) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 5.346903ms) Jun 25 23:55:50.352: INFO: (5) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 5.326472ms) Jun 25 23:55:50.352: INFO: (5) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... 
(200; 5.37421ms) Jun 25 23:55:50.352: INFO: (5) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test (200; 5.483942ms) Jun 25 23:55:50.352: INFO: (5) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 5.459975ms) Jun 25 23:55:50.352: INFO: (5) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 5.885534ms) Jun 25 23:55:50.352: INFO: (5) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname2/proxy/: tls qux (200; 5.893684ms) Jun 25 23:55:50.352: INFO: (5) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 5.968295ms) Jun 25 23:55:50.381: INFO: (6) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 28.776001ms) Jun 25 23:55:50.381: INFO: (6) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 28.79604ms) Jun 25 23:55:50.381: INFO: (6) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 29.076787ms) Jun 25 23:55:50.382: INFO: (6) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... (200; 30.051064ms) Jun 25 23:55:50.382: INFO: (6) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 30.036258ms) Jun 25 23:55:50.382: INFO: (6) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test (200; 30.098047ms) Jun 25 23:55:50.382: INFO: (6) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 30.053836ms) Jun 25 23:55:50.382: INFO: (6) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 30.13917ms) Jun 25 23:55:50.384: INFO: (6) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 31.434916ms) Jun 25 23:55:50.384: INFO: (6) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 31.53213ms) Jun 25 23:55:50.384: INFO: (6) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 31.613762ms) Jun 25 23:55:50.384: INFO: (6) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname2/proxy/: tls qux (200; 31.62823ms) Jun 25 23:55:50.384: INFO: (6) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 31.699736ms) Jun 25 23:55:50.384: INFO: (6) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 31.742318ms) Jun 25 23:55:50.389: INFO: (7) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test<... (200; 5.908818ms) Jun 25 23:55:50.390: INFO: (7) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 5.987014ms) Jun 25 23:55:50.390: INFO: (7) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 5.980812ms) Jun 25 23:55:50.390: INFO: (7) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... 
(200; 6.10895ms) Jun 25 23:55:50.391: INFO: (7) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 7.107773ms) Jun 25 23:55:50.392: INFO: (7) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname2/proxy/: tls qux (200; 7.370134ms) Jun 25 23:55:50.392: INFO: (7) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 7.609452ms) Jun 25 23:55:50.392: INFO: (7) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 7.474676ms) Jun 25 23:55:50.392: INFO: (7) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 7.569528ms) Jun 25 23:55:50.392: INFO: (7) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 7.53866ms) Jun 25 23:55:50.394: INFO: (7) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 9.191284ms) Jun 25 23:55:50.394: INFO: (7) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 9.490171ms) Jun 25 23:55:50.394: INFO: (7) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 9.286025ms) Jun 25 23:55:50.394: INFO: (7) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 9.300727ms) Jun 25 23:55:50.397: INFO: (8) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 3.521323ms) Jun 25 23:55:50.398: INFO: (8) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 4.038349ms) Jun 25 23:55:50.398: INFO: (8) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 4.007429ms) Jun 25 23:55:50.398: INFO: (8) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... (200; 4.048728ms) Jun 25 23:55:50.398: INFO: (8) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 4.160976ms) Jun 25 23:55:50.398: INFO: (8) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 4.284289ms) Jun 25 23:55:50.398: INFO: (8) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 4.461837ms) Jun 25 23:55:50.398: INFO: (8) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: ... 
(200; 4.742682ms) Jun 25 23:55:50.399: INFO: (8) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 5.222669ms) Jun 25 23:55:50.399: INFO: (8) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 5.428692ms) Jun 25 23:55:50.399: INFO: (8) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 5.482476ms) Jun 25 23:55:50.399: INFO: (8) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 5.532711ms) Jun 25 23:55:50.399: INFO: (8) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 5.456648ms) Jun 25 23:55:50.399: INFO: (8) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname2/proxy/: tls qux (200; 5.434916ms) Jun 25 23:55:50.400: INFO: (8) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 6.195136ms) Jun 25 23:55:50.404: INFO: (9) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 4.132767ms) Jun 25 23:55:50.404: INFO: (9) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test<... (200; 4.233575ms) Jun 25 23:55:50.404: INFO: (9) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 4.403416ms) Jun 25 23:55:50.405: INFO: (9) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 4.576246ms) Jun 25 23:55:50.406: INFO: (9) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 5.792687ms) Jun 25 23:55:50.406: INFO: (9) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 5.856498ms) Jun 25 23:55:50.406: INFO: (9) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 5.81219ms) Jun 25 23:55:50.406: INFO: (9) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 5.90493ms) Jun 25 23:55:50.406: INFO: (9) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 5.904291ms) Jun 25 23:55:50.406: INFO: (9) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 5.993936ms) Jun 25 23:55:50.406: INFO: (9) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 5.921313ms) Jun 25 23:55:50.406: INFO: (9) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 5.930744ms) Jun 25 23:55:50.406: INFO: (9) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname2/proxy/: tls qux (200; 5.968494ms) Jun 25 23:55:50.406: INFO: (9) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 5.980675ms) Jun 25 23:55:50.420: INFO: (10) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test<... (200; 14.16284ms) Jun 25 23:55:50.420: INFO: (10) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 14.153277ms) Jun 25 23:55:50.420: INFO: (10) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 14.24148ms) Jun 25 23:55:50.420: INFO: (10) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... 
(200; 14.212511ms) Jun 25 23:55:50.420: INFO: (10) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 14.317385ms) Jun 25 23:55:50.421: INFO: (10) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname2/proxy/: tls qux (200; 14.69414ms) Jun 25 23:55:50.421: INFO: (10) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 14.718172ms) Jun 25 23:55:50.421: INFO: (10) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 14.850695ms) Jun 25 23:55:50.421: INFO: (10) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 14.852407ms) Jun 25 23:55:50.421: INFO: (10) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 14.77613ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... (200; 5.35377ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 5.233241ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 5.164554ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 5.251728ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname2/proxy/: tls qux (200; 5.400089ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 5.651491ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 6.152291ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 5.886259ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 6.130092ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 6.01012ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 6.020577ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 5.963751ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 5.982434ms) Jun 25 23:55:50.427: INFO: (11) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test (200; 5.334173ms) Jun 25 23:55:50.434: INFO: (12) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 5.479481ms) Jun 25 23:55:50.434: INFO: (12) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 5.706438ms) Jun 25 23:55:50.434: INFO: (12) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... 
(200; 5.724843ms) Jun 25 23:55:50.434: INFO: (12) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 5.883314ms) Jun 25 23:55:50.434: INFO: (12) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname2/proxy/: tls qux (200; 5.888625ms) Jun 25 23:55:50.434: INFO: (12) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 5.906963ms) Jun 25 23:55:50.434: INFO: (12) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 5.944857ms) Jun 25 23:55:50.434: INFO: (12) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 6.04154ms) Jun 25 23:55:50.434: INFO: (12) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 6.096426ms) Jun 25 23:55:50.438: INFO: (13) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 3.220806ms) Jun 25 23:55:50.438: INFO: (13) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 2.835629ms) Jun 25 23:55:50.439: INFO: (13) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... (200; 2.970194ms) Jun 25 23:55:50.439: INFO: (13) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 3.153429ms) Jun 25 23:55:50.439: INFO: (13) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 3.714843ms) Jun 25 23:55:50.439: INFO: (13) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test<... (200; 2.7541ms) Jun 25 23:55:50.443: INFO: (14) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 2.791732ms) Jun 25 23:55:50.443: INFO: (14) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 3.041053ms) Jun 25 23:55:50.444: INFO: (14) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 4.306229ms) Jun 25 23:55:50.449: INFO: (14) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 8.980089ms) Jun 25 23:55:50.449: INFO: (14) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 9.146409ms) Jun 25 23:55:50.449: INFO: (14) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 9.138869ms) Jun 25 23:55:50.449: INFO: (14) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 9.197203ms) Jun 25 23:55:50.449: INFO: (14) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 9.311624ms) Jun 25 23:55:50.451: INFO: (14) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test (200; 3.148618ms) Jun 25 23:55:50.456: INFO: (15) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 3.351466ms) Jun 25 23:55:50.456: INFO: (15) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 3.113935ms) Jun 25 23:55:50.456: INFO: (15) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 3.908921ms) Jun 25 23:55:50.456: INFO: (15) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... 
(200; 4.272925ms) Jun 25 23:55:50.456: INFO: (15) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 4.342397ms) Jun 25 23:55:50.456: INFO: (15) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test<... (200; 4.720397ms) Jun 25 23:55:50.457: INFO: (15) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 4.507242ms) Jun 25 23:55:50.461: INFO: (16) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 3.533944ms) Jun 25 23:55:50.461: INFO: (16) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 3.63115ms) Jun 25 23:55:50.461: INFO: (16) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: ... (200; 4.279106ms) Jun 25 23:55:50.462: INFO: (16) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 4.288137ms) Jun 25 23:55:50.462: INFO: (16) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 4.575904ms) Jun 25 23:55:50.462: INFO: (16) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 4.469027ms) Jun 25 23:55:50.462: INFO: (16) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... (200; 4.522408ms) Jun 25 23:55:50.462: INFO: (16) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 4.417569ms) Jun 25 23:55:50.462: INFO: (16) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 4.564069ms) Jun 25 23:55:50.462: INFO: (16) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 4.461365ms) Jun 25 23:55:50.466: INFO: (17) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 3.378454ms) Jun 25 23:55:50.466: INFO: (17) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 3.417207ms) Jun 25 23:55:50.466: INFO: (17) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname2/proxy/: bar (200; 3.491628ms) Jun 25 23:55:50.466: INFO: (17) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 3.503729ms) Jun 25 23:55:50.466: INFO: (17) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 3.77261ms) Jun 25 23:55:50.466: INFO: (17) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 3.642369ms) Jun 25 23:55:50.466: INFO: (17) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 3.818305ms) Jun 25 23:55:50.467: INFO: (17) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 4.470358ms) Jun 25 23:55:50.467: INFO: (17) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 4.552434ms) Jun 25 23:55:50.467: INFO: (17) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 4.621245ms) Jun 25 23:55:50.467: INFO: (17) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... 
(200; 4.681883ms) Jun 25 23:55:50.467: INFO: (17) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname2/proxy/: tls qux (200; 4.768853ms) Jun 25 23:55:50.467: INFO: (17) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname2/proxy/: bar (200; 4.800797ms) Jun 25 23:55:50.467: INFO: (17) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 4.745001ms) Jun 25 23:55:50.467: INFO: (17) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 4.756723ms) Jun 25 23:55:50.467: INFO: (17) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test (200; 2.775346ms) Jun 25 23:55:50.470: INFO: (18) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:1080/proxy/: ... (200; 2.794209ms) Jun 25 23:55:50.470: INFO: (18) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:1080/proxy/: test<... (200; 3.034395ms) Jun 25 23:55:50.471: INFO: (18) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 3.711046ms) Jun 25 23:55:50.471: INFO: (18) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 4.027253ms) Jun 25 23:55:50.471: INFO: (18) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: test<... (200; 4.646469ms) Jun 25 23:55:50.477: INFO: (19) /api/v1/namespaces/proxy-4279/services/proxy-service-fzt84:portname1/proxy/: foo (200; 4.635072ms) Jun 25 23:55:50.477: INFO: (19) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 4.598569ms) Jun 25 23:55:50.477: INFO: (19) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:460/proxy/: tls baz (200; 4.711527ms) Jun 25 23:55:50.477: INFO: (19) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6/proxy/: test (200; 4.692748ms) Jun 25 23:55:50.477: INFO: (19) /api/v1/namespaces/proxy-4279/pods/proxy-service-fzt84-7jlz6:162/proxy/: bar (200; 4.670468ms) Jun 25 23:55:50.477: INFO: (19) /api/v1/namespaces/proxy-4279/services/https:proxy-service-fzt84:tlsportname1/proxy/: tls baz (200; 4.829777ms) Jun 25 23:55:50.477: INFO: (19) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:443/proxy/: ... (200; 4.754899ms) Jun 25 23:55:50.478: INFO: (19) /api/v1/namespaces/proxy-4279/pods/http:proxy-service-fzt84-7jlz6:160/proxy/: foo (200; 4.804015ms) Jun 25 23:55:50.478: INFO: (19) /api/v1/namespaces/proxy-4279/services/http:proxy-service-fzt84:portname1/proxy/: foo (200; 4.888808ms) Jun 25 23:55:50.478: INFO: (19) /api/v1/namespaces/proxy-4279/pods/https:proxy-service-fzt84-7jlz6:462/proxy/: tls qux (200; 4.898905ms) STEP: deleting ReplicationController proxy-service-fzt84 in namespace proxy-4279, will wait for the garbage collector to delete the pods Jun 25 23:55:50.536: INFO: Deleting ReplicationController proxy-service-fzt84 took: 6.206968ms Jun 25 23:55:50.836: INFO: Terminating ReplicationController proxy-service-fzt84 pods took: 300.315583ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:55:53.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4279" for this suite. 
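Each attempt logged above is a GET against the apiserver's proxy subresource, whose path has the shape /api/v1/namespaces/{ns}/{pods|services}/[{scheme}:]{name}:{port-or-portname}/proxy/{path}. client-go wraps this as ProxyGet; a minimal sketch with illustrative pod, service, and port names (the suite generates its own):

```go
// Minimal sketch of pod and service proxy GETs. Pod, service, and port
// names below are assumptions for illustration.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/namespaces/default/pods/proxy-pod:160/proxy/
	body, err := clientset.CoreV1().Pods("default").
		ProxyGet("", "proxy-pod", "160", "/", nil).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))

	// GET /api/v1/namespaces/default/services/http:proxy-service:portname1/proxy/
	body, err = clientset.CoreV1().Services("default").
		ProxyGet("http", "proxy-service", "portname1", "/", nil).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

The optional scheme prefix ("http:" or "https:") selects plain or TLS proxying, which is why the log shows both prefixed and unprefixed variants of every endpoint.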
• [SLOW TEST:9.332 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":294,"completed":64,"skipped":903,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:55:53.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:56:09.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4844" for this suite. • [SLOW TEST:16.200 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":294,"completed":65,"skipped":904,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:56:09.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Jun 25 23:56:09.590: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:56:22.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2860" for this suite. 
• [SLOW TEST:13.450 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":294,"completed":66,"skipped":947,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:56:23.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 25 23:56:23.114: INFO: Waiting up to 5m0s for pod "pod-b9ba3a6e-f7cd-4d08-90ec-5097b8b0c8c5" in namespace "emptydir-9063" to be "Succeeded or Failed" Jun 25 23:56:23.185: INFO: Pod "pod-b9ba3a6e-f7cd-4d08-90ec-5097b8b0c8c5": Phase="Pending", Reason="", readiness=false. Elapsed: 71.22831ms Jun 25 23:56:25.190: INFO: Pod "pod-b9ba3a6e-f7cd-4d08-90ec-5097b8b0c8c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075534357s Jun 25 23:56:27.194: INFO: Pod "pod-b9ba3a6e-f7cd-4d08-90ec-5097b8b0c8c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079827026s STEP: Saw pod success Jun 25 23:56:27.194: INFO: Pod "pod-b9ba3a6e-f7cd-4d08-90ec-5097b8b0c8c5" satisfied condition "Succeeded or Failed" Jun 25 23:56:27.197: INFO: Trying to get logs from node latest-worker2 pod pod-b9ba3a6e-f7cd-4d08-90ec-5097b8b0c8c5 container test-container: STEP: delete the pod Jun 25 23:56:27.238: INFO: Waiting for pod pod-b9ba3a6e-f7cd-4d08-90ec-5097b8b0c8c5 to disappear Jun 25 23:56:27.253: INFO: Pod pod-b9ba3a6e-f7cd-4d08-90ec-5097b8b0c8c5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:56:27.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9063" for this suite. 
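The emptydir-9063 test above writes a mode-0666 file into a memory-backed emptyDir and verifies its contents and permissions from inside the pod. The same setup as a sketch, busybox image assumed:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "touch /data/f && chmod 0666 /data/f && mount | grep /data && ls -l /data/f"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory     # tmpfs: lives in RAM, wiped when the pod goes away
EOF
kubectl logs emptydir-tmpfs-demo   # expect "tmpfs on /data" and "-rw-rw-rw-"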
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":67,"skipped":947,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:56:27.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-08e1f8f4-4b13-4c27-81fc-d655dec1673c in namespace container-probe-3893 Jun 25 23:56:31.432: INFO: Started pod busybox-08e1f8f4-4b13-4c27-81fc-d655dec1673c in namespace container-probe-3893 STEP: checking the pod's current state and verifying that restartCount is present Jun 25 23:56:31.435: INFO: Initial restart count of pod busybox-08e1f8f4-4b13-4c27-81fc-d655dec1673c is 0 Jun 25 23:57:17.547: INFO: Restart count of pod container-probe-3893/busybox-08e1f8f4-4b13-4c27-81fc-d655dec1673c is now 1 (46.112466244s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:57:17.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3893" for this suite. 
• [SLOW TEST:50.347 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":294,"completed":68,"skipped":973,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:57:17.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1818 STEP: creating service affinity-nodeport-transition in namespace services-1818 STEP: creating replication controller affinity-nodeport-transition in namespace services-1818 I0625 23:57:17.755118 8 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1818, replica count: 3 I0625 23:57:20.805529 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0625 23:57:23.805882 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 25 23:57:23.817: INFO: Creating new exec pod Jun 25 23:57:28.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1818 execpod-affinityv4hxq -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jun 25 23:57:29.134: INFO: stderr: "I0625 23:57:29.006681 1239 log.go:172] (0xc0009e71e0) (0xc0009e8500) Create stream\nI0625 23:57:29.006734 1239 log.go:172] (0xc0009e71e0) (0xc0009e8500) Stream added, broadcasting: 1\nI0625 23:57:29.010764 1239 log.go:172] (0xc0009e71e0) Reply frame received for 1\nI0625 23:57:29.010800 1239 log.go:172] (0xc0009e71e0) (0xc00084c6e0) Create stream\nI0625 23:57:29.010811 1239 log.go:172] (0xc0009e71e0) (0xc00084c6e0) Stream added, broadcasting: 3\nI0625 23:57:29.011874 1239 log.go:172] (0xc0009e71e0) Reply frame received for 3\nI0625 23:57:29.011920 1239 log.go:172] (0xc0009e71e0) (0xc00084d680) Create stream\nI0625 23:57:29.012062 1239 log.go:172] (0xc0009e71e0) (0xc00084d680) Stream added, broadcasting: 5\nI0625 
23:57:29.013025 1239 log.go:172] (0xc0009e71e0) Reply frame received for 5\nI0625 23:57:29.120936 1239 log.go:172] (0xc0009e71e0) Data frame received for 5\nI0625 23:57:29.120968 1239 log.go:172] (0xc00084d680) (5) Data frame handling\nI0625 23:57:29.121001 1239 log.go:172] (0xc00084d680) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0625 23:57:29.124112 1239 log.go:172] (0xc0009e71e0) Data frame received for 5\nI0625 23:57:29.124133 1239 log.go:172] (0xc00084d680) (5) Data frame handling\nI0625 23:57:29.124143 1239 log.go:172] (0xc00084d680) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0625 23:57:29.124557 1239 log.go:172] (0xc0009e71e0) Data frame received for 3\nI0625 23:57:29.124600 1239 log.go:172] (0xc00084c6e0) (3) Data frame handling\nI0625 23:57:29.124619 1239 log.go:172] (0xc0009e71e0) Data frame received for 5\nI0625 23:57:29.124627 1239 log.go:172] (0xc00084d680) (5) Data frame handling\nI0625 23:57:29.126410 1239 log.go:172] (0xc0009e71e0) Data frame received for 1\nI0625 23:57:29.126438 1239 log.go:172] (0xc0009e8500) (1) Data frame handling\nI0625 23:57:29.126458 1239 log.go:172] (0xc0009e8500) (1) Data frame sent\nI0625 23:57:29.126473 1239 log.go:172] (0xc0009e71e0) (0xc0009e8500) Stream removed, broadcasting: 1\nI0625 23:57:29.126490 1239 log.go:172] (0xc0009e71e0) Go away received\nI0625 23:57:29.127023 1239 log.go:172] (0xc0009e71e0) (0xc0009e8500) Stream removed, broadcasting: 1\nI0625 23:57:29.127055 1239 log.go:172] (0xc0009e71e0) (0xc00084c6e0) Stream removed, broadcasting: 3\nI0625 23:57:29.127069 1239 log.go:172] (0xc0009e71e0) (0xc00084d680) Stream removed, broadcasting: 5\n" Jun 25 23:57:29.134: INFO: stdout: "" Jun 25 23:57:29.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1818 execpod-affinityv4hxq -- /bin/sh -x -c nc -zv -t -w 2 10.105.69.44 80' Jun 25 23:57:29.321: INFO: stderr: "I0625 23:57:29.257007 1259 log.go:172] (0xc000ab3810) (0xc00085ed20) Create stream\nI0625 23:57:29.257095 1259 log.go:172] (0xc000ab3810) (0xc00085ed20) Stream added, broadcasting: 1\nI0625 23:57:29.262106 1259 log.go:172] (0xc000ab3810) Reply frame received for 1\nI0625 23:57:29.262144 1259 log.go:172] (0xc000ab3810) (0xc00085b220) Create stream\nI0625 23:57:29.262155 1259 log.go:172] (0xc000ab3810) (0xc00085b220) Stream added, broadcasting: 3\nI0625 23:57:29.262967 1259 log.go:172] (0xc000ab3810) Reply frame received for 3\nI0625 23:57:29.262999 1259 log.go:172] (0xc000ab3810) (0xc0008468c0) Create stream\nI0625 23:57:29.263007 1259 log.go:172] (0xc000ab3810) (0xc0008468c0) Stream added, broadcasting: 5\nI0625 23:57:29.263822 1259 log.go:172] (0xc000ab3810) Reply frame received for 5\nI0625 23:57:29.312230 1259 log.go:172] (0xc000ab3810) Data frame received for 5\nI0625 23:57:29.312273 1259 log.go:172] (0xc0008468c0) (5) Data frame handling\nI0625 23:57:29.312289 1259 log.go:172] (0xc0008468c0) (5) Data frame sent\nI0625 23:57:29.312301 1259 log.go:172] (0xc000ab3810) Data frame received for 5\nI0625 23:57:29.312312 1259 log.go:172] (0xc0008468c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.69.44 80\nConnection to 10.105.69.44 80 port [tcp/http] succeeded!\nI0625 23:57:29.312341 1259 log.go:172] (0xc000ab3810) Data frame received for 3\nI0625 23:57:29.312355 1259 log.go:172] (0xc00085b220) (3) Data frame handling\nI0625 23:57:29.314312 1259 log.go:172] (0xc000ab3810) Data frame received for 1\nI0625 
23:57:29.314356 1259 log.go:172] (0xc00085ed20) (1) Data frame handling\nI0625 23:57:29.314381 1259 log.go:172] (0xc00085ed20) (1) Data frame sent\nI0625 23:57:29.314411 1259 log.go:172] (0xc000ab3810) (0xc00085ed20) Stream removed, broadcasting: 1\nI0625 23:57:29.314446 1259 log.go:172] (0xc000ab3810) Go away received\nI0625 23:57:29.314910 1259 log.go:172] (0xc000ab3810) (0xc00085ed20) Stream removed, broadcasting: 1\nI0625 23:57:29.314936 1259 log.go:172] (0xc000ab3810) (0xc00085b220) Stream removed, broadcasting: 3\nI0625 23:57:29.314950 1259 log.go:172] (0xc000ab3810) (0xc0008468c0) Stream removed, broadcasting: 5\n" Jun 25 23:57:29.321: INFO: stdout: "" Jun 25 23:57:29.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1818 execpod-affinityv4hxq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30980' Jun 25 23:57:29.558: INFO: stderr: "I0625 23:57:29.476555 1280 log.go:172] (0xc000c60e70) (0xc0002d7900) Create stream\nI0625 23:57:29.476606 1280 log.go:172] (0xc000c60e70) (0xc0002d7900) Stream added, broadcasting: 1\nI0625 23:57:29.479612 1280 log.go:172] (0xc000c60e70) Reply frame received for 1\nI0625 23:57:29.479666 1280 log.go:172] (0xc000c60e70) (0xc000432640) Create stream\nI0625 23:57:29.479683 1280 log.go:172] (0xc000c60e70) (0xc000432640) Stream added, broadcasting: 3\nI0625 23:57:29.480738 1280 log.go:172] (0xc000c60e70) Reply frame received for 3\nI0625 23:57:29.480774 1280 log.go:172] (0xc000c60e70) (0xc000528500) Create stream\nI0625 23:57:29.480787 1280 log.go:172] (0xc000c60e70) (0xc000528500) Stream added, broadcasting: 5\nI0625 23:57:29.482116 1280 log.go:172] (0xc000c60e70) Reply frame received for 5\nI0625 23:57:29.551084 1280 log.go:172] (0xc000c60e70) Data frame received for 5\nI0625 23:57:29.551126 1280 log.go:172] (0xc000528500) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30980\nConnection to 172.17.0.13 30980 port [tcp/30980] succeeded!\nI0625 23:57:29.551144 1280 log.go:172] (0xc000c60e70) Data frame received for 3\nI0625 23:57:29.551171 1280 log.go:172] (0xc000432640) (3) Data frame handling\nI0625 23:57:29.551196 1280 log.go:172] (0xc000528500) (5) Data frame sent\nI0625 23:57:29.551207 1280 log.go:172] (0xc000c60e70) Data frame received for 5\nI0625 23:57:29.551217 1280 log.go:172] (0xc000528500) (5) Data frame handling\nI0625 23:57:29.553289 1280 log.go:172] (0xc000c60e70) Data frame received for 1\nI0625 23:57:29.553311 1280 log.go:172] (0xc0002d7900) (1) Data frame handling\nI0625 23:57:29.553327 1280 log.go:172] (0xc0002d7900) (1) Data frame sent\nI0625 23:57:29.553344 1280 log.go:172] (0xc000c60e70) (0xc0002d7900) Stream removed, broadcasting: 1\nI0625 23:57:29.553509 1280 log.go:172] (0xc000c60e70) Go away received\nI0625 23:57:29.553749 1280 log.go:172] (0xc000c60e70) (0xc0002d7900) Stream removed, broadcasting: 1\nI0625 23:57:29.553780 1280 log.go:172] (0xc000c60e70) (0xc000432640) Stream removed, broadcasting: 3\nI0625 23:57:29.553806 1280 log.go:172] (0xc000c60e70) (0xc000528500) Stream removed, broadcasting: 5\n" Jun 25 23:57:29.558: INFO: stdout: "" Jun 25 23:57:29.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1818 execpod-affinityv4hxq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30980' Jun 25 23:57:29.791: INFO: stderr: "I0625 23:57:29.704872 1300 log.go:172] (0xc000bfc370) (0xc0004ea640) Create stream\nI0625 23:57:29.704923 1300 log.go:172] (0xc000bfc370) 
(0xc0004ea640) Stream added, broadcasting: 1\nI0625 23:57:29.707542 1300 log.go:172] (0xc000bfc370) Reply frame received for 1\nI0625 23:57:29.707578 1300 log.go:172] (0xc000bfc370) (0xc000292780) Create stream\nI0625 23:57:29.707590 1300 log.go:172] (0xc000bfc370) (0xc000292780) Stream added, broadcasting: 3\nI0625 23:57:29.708353 1300 log.go:172] (0xc000bfc370) Reply frame received for 3\nI0625 23:57:29.708382 1300 log.go:172] (0xc000bfc370) (0xc000293720) Create stream\nI0625 23:57:29.708392 1300 log.go:172] (0xc000bfc370) (0xc000293720) Stream added, broadcasting: 5\nI0625 23:57:29.709075 1300 log.go:172] (0xc000bfc370) Reply frame received for 5\nI0625 23:57:29.782162 1300 log.go:172] (0xc000bfc370) Data frame received for 5\nI0625 23:57:29.782191 1300 log.go:172] (0xc000293720) (5) Data frame handling\nI0625 23:57:29.782200 1300 log.go:172] (0xc000293720) (5) Data frame sent\nI0625 23:57:29.782205 1300 log.go:172] (0xc000bfc370) Data frame received for 5\nI0625 23:57:29.782209 1300 log.go:172] (0xc000293720) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30980\nConnection to 172.17.0.12 30980 port [tcp/30980] succeeded!\nI0625 23:57:29.782224 1300 log.go:172] (0xc000bfc370) Data frame received for 3\nI0625 23:57:29.782229 1300 log.go:172] (0xc000292780) (3) Data frame handling\nI0625 23:57:29.783445 1300 log.go:172] (0xc000bfc370) Data frame received for 1\nI0625 23:57:29.783467 1300 log.go:172] (0xc0004ea640) (1) Data frame handling\nI0625 23:57:29.783481 1300 log.go:172] (0xc0004ea640) (1) Data frame sent\nI0625 23:57:29.783525 1300 log.go:172] (0xc000bfc370) (0xc0004ea640) Stream removed, broadcasting: 1\nI0625 23:57:29.783676 1300 log.go:172] (0xc000bfc370) Go away received\nI0625 23:57:29.783805 1300 log.go:172] (0xc000bfc370) (0xc0004ea640) Stream removed, broadcasting: 1\nI0625 23:57:29.783819 1300 log.go:172] (0xc000bfc370) (0xc000292780) Stream removed, broadcasting: 3\nI0625 23:57:29.783825 1300 log.go:172] (0xc000bfc370) (0xc000293720) Stream removed, broadcasting: 5\n" Jun 25 23:57:29.791: INFO: stdout: "" Jun 25 23:57:29.797: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1818 execpod-affinityv4hxq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30980/ ; done' Jun 25 23:57:30.218: INFO: stderr: "I0625 23:57:29.950146 1320 log.go:172] (0xc000cbae70) (0xc00061dcc0) Create stream\nI0625 23:57:29.950203 1320 log.go:172] (0xc000cbae70) (0xc00061dcc0) Stream added, broadcasting: 1\nI0625 23:57:29.952244 1320 log.go:172] (0xc000cbae70) Reply frame received for 1\nI0625 23:57:29.952303 1320 log.go:172] (0xc000cbae70) (0xc0006da320) Create stream\nI0625 23:57:29.952320 1320 log.go:172] (0xc000cbae70) (0xc0006da320) Stream added, broadcasting: 3\nI0625 23:57:29.953740 1320 log.go:172] (0xc000cbae70) Reply frame received for 3\nI0625 23:57:29.953784 1320 log.go:172] (0xc000cbae70) (0xc000630460) Create stream\nI0625 23:57:29.953806 1320 log.go:172] (0xc000cbae70) (0xc000630460) Stream added, broadcasting: 5\nI0625 23:57:29.954754 1320 log.go:172] (0xc000cbae70) Reply frame received for 5\nI0625 23:57:30.024762 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.024792 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.024818 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ seq 0 15\nI0625 23:57:30.027050 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.027238 1320 
log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.027268 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.027317 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.027344 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.027367 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.120151 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.120178 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.120196 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.120874 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.120897 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.120905 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.120919 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.120924 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.120930 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.127528 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.127555 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.127584 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.128505 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.128525 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.128534 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.128546 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.128553 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.128560 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.135479 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.135513 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.135554 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.135830 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.135860 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.135881 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.135970 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.135994 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.136014 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.142490 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.142510 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.142527 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.142881 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.142900 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.142920 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.142943 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.142955 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.142964 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.146658 1320 
log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.146677 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.146690 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.147333 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.147372 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.147395 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.147430 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.147450 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.147476 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.151967 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.151999 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.152038 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.152357 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.152372 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.152382 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.152399 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.152422 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.152440 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.160455 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.160472 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.160487 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.161328 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.161366 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.161385 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.161407 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.161421 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.161443 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.166935 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.166965 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.166977 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.167015 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.167050 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.167079 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.167119 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.167137 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.167152 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.170946 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.170964 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.170979 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.171937 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.171966 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.171994 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q 
-s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.172104 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.172118 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.172129 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.177011 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.177034 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.177050 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.177822 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.177853 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.177873 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.177916 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.177949 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.178104 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.183649 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.183677 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.183696 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.184351 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.184386 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.184398 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.184412 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.184422 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.184447 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.189005 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.189057 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.189091 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.189586 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.189603 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.189627 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.189645 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.189665 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.189678 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.192493 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.192509 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.192518 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.192875 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.192907 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.192921 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.192938 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.192955 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.192966 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.198992 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.199005 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 
23:57:30.199013 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.199699 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.199713 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.199719 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0625 23:57:30.199733 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.199752 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.199763 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.199777 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.199784 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.199799 1320 log.go:172] (0xc000630460) (5) Data frame sent\n http://172.17.0.13:30980/\nI0625 23:57:30.204464 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.204480 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.204491 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.204897 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.204924 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.204946 1320 log.go:172] (0xc000630460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.205378 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.205397 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.205415 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.209017 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.209038 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.209052 1320 log.go:172] (0xc0006da320) (3) Data frame sent\nI0625 23:57:30.209780 1320 log.go:172] (0xc000cbae70) Data frame received for 5\nI0625 23:57:30.209795 1320 log.go:172] (0xc000630460) (5) Data frame handling\nI0625 23:57:30.209837 1320 log.go:172] (0xc000cbae70) Data frame received for 3\nI0625 23:57:30.209872 1320 log.go:172] (0xc0006da320) (3) Data frame handling\nI0625 23:57:30.211837 1320 log.go:172] (0xc000cbae70) Data frame received for 1\nI0625 23:57:30.211856 1320 log.go:172] (0xc00061dcc0) (1) Data frame handling\nI0625 23:57:30.211864 1320 log.go:172] (0xc00061dcc0) (1) Data frame sent\nI0625 23:57:30.211875 1320 log.go:172] (0xc000cbae70) (0xc00061dcc0) Stream removed, broadcasting: 1\nI0625 23:57:30.212013 1320 log.go:172] (0xc000cbae70) Go away received\nI0625 23:57:30.212264 1320 log.go:172] (0xc000cbae70) (0xc00061dcc0) Stream removed, broadcasting: 1\nI0625 23:57:30.212284 1320 log.go:172] (0xc000cbae70) (0xc0006da320) Stream removed, broadcasting: 3\nI0625 23:57:30.212296 1320 log.go:172] (0xc000cbae70) (0xc000630460) Stream removed, broadcasting: 5\n" Jun 25 23:57:30.219: INFO: stdout: "\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-p52nk\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-p52nk\naffinity-nodeport-transition-p52nk\naffinity-nodeport-transition-nkfg6\naffinity-nodeport-transition-nkfg6\naffinity-nodeport-transition-p52nk\naffinity-nodeport-transition-p52nk\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-nkfg6" Jun 25 23:57:30.219: INFO: Received response from 
host: Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-p52nk Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-p52nk Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-p52nk Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-nkfg6 Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-nkfg6 Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-p52nk Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-p52nk Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.219: INFO: Received response from host: affinity-nodeport-transition-nkfg6 Jun 25 23:57:30.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1818 execpod-affinityv4hxq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30980/ ; done' Jun 25 23:57:30.555: INFO: stderr: "I0625 23:57:30.379326 1338 log.go:172] (0xc0005d4fd0) (0xc0006d1180) Create stream\nI0625 23:57:30.379371 1338 log.go:172] (0xc0005d4fd0) (0xc0006d1180) Stream added, broadcasting: 1\nI0625 23:57:30.382225 1338 log.go:172] (0xc0005d4fd0) Reply frame received for 1\nI0625 23:57:30.382258 1338 log.go:172] (0xc0005d4fd0) (0xc0006d1c20) Create stream\nI0625 23:57:30.382266 1338 log.go:172] (0xc0005d4fd0) (0xc0006d1c20) Stream added, broadcasting: 3\nI0625 23:57:30.383694 1338 log.go:172] (0xc0005d4fd0) Reply frame received for 3\nI0625 23:57:30.383746 1338 log.go:172] (0xc0005d4fd0) (0xc0006d4c80) Create stream\nI0625 23:57:30.383767 1338 log.go:172] (0xc0005d4fd0) (0xc0006d4c80) Stream added, broadcasting: 5\nI0625 23:57:30.384983 1338 log.go:172] (0xc0005d4fd0) Reply frame received for 5\nI0625 23:57:30.462073 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.462105 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.462118 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.462140 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.462145 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.462150 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.467051 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.467118 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.467148 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.467913 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.467934 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 
23:57:30.467947 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ I0625 23:57:30.468020 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.468050 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.468088 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.468117 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.468126 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\necho\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.468139 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.474657 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.474694 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.474723 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.475517 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.475530 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.475537 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.475548 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.475557 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.475566 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.480670 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.480702 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.480734 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.481347 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.481369 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.481388 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.481407 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.481429 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.481447 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.485473 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.485492 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.485504 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.485764 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.485781 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.485796 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.485818 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.485829 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.485836 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.489438 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.489459 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.489483 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.490452 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.490475 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.490495 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.490535 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 
23:57:30.490552 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.490572 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\nI0625 23:57:30.494884 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.494898 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.494906 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.495216 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.495230 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.495244 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.495380 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.495395 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.495408 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.499601 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.499614 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.499626 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.500260 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.500290 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.500303 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.500316 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.500324 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.500332 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.504610 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.504630 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.504734 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.505106 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.505146 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.505157 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.505175 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.505197 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.505229 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.509858 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.509880 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.509912 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.510374 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.510412 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.510430 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.510447 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.510461 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.510479 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.514718 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.514835 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.514864 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.515740 1338 log.go:172] (0xc0005d4fd0) Data frame received for 
5\nI0625 23:57:30.515767 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.515782 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.515801 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.515821 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.515835 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.519949 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.519983 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.520017 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.520739 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.520760 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.520776 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.521030 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.521050 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.521058 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.529989 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.530017 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.530034 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.530121 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.530151 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.530179 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.533819 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.533839 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.533859 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.534245 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.534267 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.534276 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.534291 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.534296 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.534304 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.538283 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.538305 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.538318 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.538600 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.538616 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.538622 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.538632 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.538637 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.538642 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.543259 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.543273 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.543284 1338 log.go:172] (0xc0006d1c20) (3) Data frame 
sent\nI0625 23:57:30.543786 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.543798 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.543803 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.543813 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.543817 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.543822 1338 log.go:172] (0xc0006d4c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30980/\nI0625 23:57:30.547639 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.547659 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.547672 1338 log.go:172] (0xc0006d1c20) (3) Data frame sent\nI0625 23:57:30.548144 1338 log.go:172] (0xc0005d4fd0) Data frame received for 3\nI0625 23:57:30.548160 1338 log.go:172] (0xc0006d1c20) (3) Data frame handling\nI0625 23:57:30.548214 1338 log.go:172] (0xc0005d4fd0) Data frame received for 5\nI0625 23:57:30.548237 1338 log.go:172] (0xc0006d4c80) (5) Data frame handling\nI0625 23:57:30.549809 1338 log.go:172] (0xc0005d4fd0) Data frame received for 1\nI0625 23:57:30.549849 1338 log.go:172] (0xc0006d1180) (1) Data frame handling\nI0625 23:57:30.549868 1338 log.go:172] (0xc0006d1180) (1) Data frame sent\nI0625 23:57:30.549888 1338 log.go:172] (0xc0005d4fd0) (0xc0006d1180) Stream removed, broadcasting: 1\nI0625 23:57:30.549907 1338 log.go:172] (0xc0005d4fd0) Go away received\nI0625 23:57:30.550248 1338 log.go:172] (0xc0005d4fd0) (0xc0006d1180) Stream removed, broadcasting: 1\nI0625 23:57:30.550265 1338 log.go:172] (0xc0005d4fd0) (0xc0006d1c20) Stream removed, broadcasting: 3\nI0625 23:57:30.550272 1338 log.go:172] (0xc0005d4fd0) (0xc0006d4c80) Stream removed, broadcasting: 5\n" Jun 25 23:57:30.555: INFO: stdout: "\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7\naffinity-nodeport-transition-hptt7" Jun 25 23:57:30.555: INFO: Received response from host: Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: 
affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Received response from host: affinity-nodeport-transition-hptt7 Jun 25 23:57:30.555: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1818, will wait for the garbage collector to delete the pods Jun 25 23:57:30.894: INFO: Deleting ReplicationController affinity-nodeport-transition took: 202.235948ms Jun 25 23:57:31.394: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.303232ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:57:45.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1818" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:27.768 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":69,"skipped":987,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:57:45.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-1126d16b-6947-48bf-b55f-62e821e12038 in namespace container-probe-4083 Jun 25 23:57:49.444: INFO: Started pod liveness-1126d16b-6947-48bf-b55f-62e821e12038 in namespace container-probe-4083 STEP: checking the pod's current state and verifying that restartCount is present Jun 25 23:57:49.448: INFO: Initial restart count of pod liveness-1126d16b-6947-48bf-b55f-62e821e12038 is 0 Jun 25 23:58:10.282: INFO: Restart count of pod container-probe-4083/liveness-1126d16b-6947-48bf-b55f-62e821e12038 is now 1 (20.834699432s elapsed) STEP: deleting the pod 
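In the services-1818 test above, the first curl loop (affinity off) lands on all three backends (hptt7, p52nk, nkfg6), while the second loop, after affinity is switched on, pins every request to hptt7. Affinity is a single mutable Service field; a sketch, assuming existing pods labeled app=demo (hypothetical):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP    # pin each client IP to one backend
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800    # default stickiness window
EOF
# Switching affinity on a live Service takes effect without recreating it:
kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"None"}}'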
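The container-probe-4083 test running here does the same check over HTTP: the kubelet GETs /healthz and restarts the container after enough consecutive failures (one restart at ~20.8 s above). A sketch using the registry.k8s.io/liveness test image, whose /healthz handler deliberately begins returning 500 after about ten seconds (image behavior assumed from the Kubernetes docs):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3         # 3 consecutive failures (the default threshold) restart it
EOF
kubectl get pod liveness-http-demo -w   # RESTARTS climbs once /healthz starts failing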
[AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:58:10.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4083" for this suite. • [SLOW TEST:24.972 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":294,"completed":70,"skipped":1005,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:58:10.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:58:10.907: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Pending, waiting for it to be Running (with Ready = true) Jun 25 23:58:12.911: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Pending, waiting for it to be Running (with Ready = true) Jun 25 23:58:14.911: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = false) Jun 25 23:58:16.911: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = false) Jun 25 23:58:18.911: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = false) Jun 25 23:58:20.911: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = false) Jun 25 23:58:22.915: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = false) Jun 25 23:58:24.914: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = false) Jun 25 23:58:26.912: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = false) Jun 25 23:58:28.911: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = false) Jun 25 23:58:30.911: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = false) Jun 25 
23:58:32.911: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = false) Jun 25 23:58:34.911: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = false) Jun 25 23:58:36.911: INFO: The status of Pod test-webserver-604b9e8f-ae57-4d1a-adaa-918ab19b6a9d is Running (Ready = true) Jun 25 23:58:36.914: INFO: Container started at 2020-06-25 23:58:13 +0000 UTC, pod became ready at 2020-06-25 23:58:36 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:58:36.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4251" for this suite. • [SLOW TEST:26.571 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":294,"completed":71,"skipped":1050,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:58:36.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 25 23:58:37.061: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 25 23:58:37.070: INFO: Number of nodes with available pods: 0 Jun 25 23:58:37.070: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jun 25 23:58:37.170: INFO: Number of nodes with available pods: 0 Jun 25 23:58:37.170: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:58:38.175: INFO: Number of nodes with available pods: 0 Jun 25 23:58:38.175: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:58:39.175: INFO: Number of nodes with available pods: 0 Jun 25 23:58:39.175: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:58:40.174: INFO: Number of nodes with available pods: 0 Jun 25 23:58:40.174: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:58:41.174: INFO: Number of nodes with available pods: 1 Jun 25 23:58:41.174: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 25 23:58:41.206: INFO: Number of nodes with available pods: 1 Jun 25 23:58:41.206: INFO: Number of running nodes: 0, number of available pods: 1 Jun 25 23:58:42.251: INFO: Number of nodes with available pods: 0 Jun 25 23:58:42.251: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 25 23:58:42.479: INFO: Number of nodes with available pods: 0 Jun 25 23:58:42.479: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:58:43.483: INFO: Number of nodes with available pods: 0 Jun 25 23:58:43.483: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:58:44.484: INFO: Number of nodes with available pods: 0 Jun 25 23:58:44.484: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:58:45.483: INFO: Number of nodes with available pods: 0 Jun 25 23:58:45.483: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:58:46.484: INFO: Number of nodes with available pods: 0 Jun 25 23:58:46.484: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:58:47.482: INFO: Number of nodes with available pods: 0 Jun 25 23:58:47.482: INFO: Node latest-worker is running more than one daemon pod Jun 25 23:58:48.484: INFO: Number of nodes with available pods: 1 Jun 25 23:58:48.484: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6494, will wait for the garbage collector to delete the pods Jun 25 23:58:48.550: INFO: Deleting DaemonSet.extensions daemon-set took: 7.274275ms Jun 25 23:58:48.850: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.244969ms Jun 25 23:58:54.953: INFO: Number of nodes with available pods: 0 Jun 25 23:58:54.953: INFO: Number of running nodes: 0, number of available pods: 0 Jun 25 23:58:54.956: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6494/daemonsets","resourceVersion":"15907785"},"items":null} Jun 25 23:58:54.959: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6494/pods","resourceVersion":"15907785"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:58:55.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6494" for this suite. 
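------------------------------
The sequence above is driven by a DaemonSet whose pod template carries a nodeSelector: daemon pods appear only on nodes with the matching label, are evicted when the label changes, and roll according to the update strategy once the selector is switched. A minimal Go sketch of such a DaemonSet, with assumed label keys and an image borrowed from elsewhere in this run:

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"daemonset-name": "daemon-set"}
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    // Pods schedule only onto nodes carrying this label;
                    // relabeling a node adds or evicts the daemon pod.
                    NodeSelector: map[string]string{"color": "blue"}, // assumed label
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "docker.io/library/httpd:2.4.38-alpine",
                    }},
                },
            },
        },
    }
    fmt.Println(ds.Name)
}
------------------------------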
• [SLOW TEST:18.104 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":294,"completed":72,"skipped":1053,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:58:55.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 25 23:59:03.146: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 25 23:59:03.149: INFO: Pod pod-with-prestop-http-hook still exists Jun 25 23:59:05.150: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 25 23:59:05.154: INFO: Pod pod-with-prestop-http-hook still exists Jun 25 23:59:07.150: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 25 23:59:07.155: INFO: Pod pod-with-prestop-http-hook still exists Jun 25 23:59:09.150: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 25 23:59:09.155: INFO: Pod pod-with-prestop-http-hook still exists Jun 25 23:59:11.150: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 25 23:59:11.156: INFO: Pod pod-with-prestop-http-hook still exists Jun 25 23:59:13.150: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 25 23:59:13.154: INFO: Pod pod-with-prestop-http-hook still exists Jun 25 23:59:15.150: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 25 23:59:15.152: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:59:15.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1735" for this suite. 
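------------------------------
The repeated "still exists" polls above are expected: deleting a pod with a preStop hook makes the kubelet run the hook (here an HTTP GET against the handler pod created in BeforeEach) and wait out graceful termination before the pod object disappears. A minimal Go sketch of a container with an HTTP preStop hook, with an assumed image and handler address (in client-go v0.23+ the handler type is LifecycleHandler):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "main",
                Image: "k8s.gcr.io/pause:3.2", // assumed image
                Lifecycle: &corev1.Lifecycle{
                    // Run by the kubelet before the container is stopped.
                    PreStop: &corev1.Handler{ // *LifecycleHandler in newer client-go
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/echo?msg=prestop",
                            Host: "10.0.0.10", // assumed handler-pod IP
                            Port: intstr.FromInt(8080),
                        },
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
------------------------------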
• [SLOW TEST:20.151 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":294,"completed":73,"skipped":1057,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:59:15.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 25 23:59:20.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6814" for this suite. 
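------------------------------
The property checked here is that watches opened at the same resourceVersion observe the same events in the same order. A minimal Go sketch of opening one such watch with client-go v0.18-era method signatures; the kubeconfig path, namespace, and resourceVersion are placeholders:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Two watches started from the same resourceVersion must deliver
    // identical event streams, element for element.
    w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
        ResourceVersion: "0", // placeholder: a concrete RV from a prior List
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        fmt.Println(ev.Type)
    }
}
------------------------------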
• [SLOW TEST:5.032 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":294,"completed":74,"skipped":1065,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 25 23:59:20.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:01:20.386: INFO: Deleting pod "var-expansion-3d532305-4337-4aa1-84b4-0faf916ac026" in namespace "var-expansion-7683" Jun 26 00:01:20.390: INFO: Wait up to 5m0s for pod "var-expansion-3d532305-4337-4aa1-84b4-0faf916ac026" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:01:22.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7683" for this suite. 
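------------------------------
This spec passes by failing: a subPathExpr that expands to an absolute path must be rejected, so the container never starts, and the two-minute gap before the delete is the framework confirming that. A minimal Go sketch of the offending mount shape, with an assumed image and env value:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "c",
                Image:   "docker.io/library/busybox:1.29", // assumed image
                Command: []string{"sh", "-c", "sleep 3600"},
                Env: []corev1.EnvVar{{
                    Name:  "ABS_PATH",
                    Value: "/tmp", // absolute, so the expansion below is invalid
                }},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "work",
                    MountPath: "/mnt/work",
                    // Expands to "/tmp"; absolute subpaths are rejected,
                    // so the kubelet refuses to start the container.
                    SubPathExpr: "$(ABS_PATH)",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name:         "work",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
        },
    }
    fmt.Println(pod.Name)
}
------------------------------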
• [SLOW TEST:122.240 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":294,"completed":75,"skipped":1089,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:01:22.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-fb228650-3f98-4c1d-a6d1-fa7a0726d31f STEP: Creating a pod to test consume configMaps Jun 26 00:01:22.567: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-135aa1c8-99a9-4f24-90dd-766cfe66271a" in namespace "projected-9124" to be "Succeeded or Failed" Jun 26 00:01:22.623: INFO: Pod "pod-projected-configmaps-135aa1c8-99a9-4f24-90dd-766cfe66271a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.42028ms Jun 26 00:01:24.683: INFO: Pod "pod-projected-configmaps-135aa1c8-99a9-4f24-90dd-766cfe66271a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116010291s Jun 26 00:01:26.687: INFO: Pod "pod-projected-configmaps-135aa1c8-99a9-4f24-90dd-766cfe66271a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120543615s STEP: Saw pod success Jun 26 00:01:26.687: INFO: Pod "pod-projected-configmaps-135aa1c8-99a9-4f24-90dd-766cfe66271a" satisfied condition "Succeeded or Failed" Jun 26 00:01:26.691: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-135aa1c8-99a9-4f24-90dd-766cfe66271a container projected-configmap-volume-test: STEP: delete the pod Jun 26 00:01:26.747: INFO: Waiting for pod pod-projected-configmaps-135aa1c8-99a9-4f24-90dd-766cfe66271a to disappear Jun 26 00:01:26.769: INFO: Pod pod-projected-configmaps-135aa1c8-99a9-4f24-90dd-766cfe66271a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:01:26.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9124" for this suite. 
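------------------------------
"As non-root" means the consuming container runs under an explicit non-zero UID and must still be able to read the file projected from the ConfigMap. A minimal Go sketch of that pod shape, with assumed names, image, and UID:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1000) // assumed non-root UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
        Spec: corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            RestartPolicy:   corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "docker.io/library/busybox:1.29", // assumed image
                Command: []string{"cat", "/etc/projected/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected",
                    MountPath: "/etc/projected",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-configmap-test-volume", // assumed name
                                },
                            },
                        }},
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
------------------------------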
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":294,"completed":76,"skipped":1098,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:01:26.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:01:43.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8670" for this suite. • [SLOW TEST:16.249 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":294,"completed":77,"skipped":1132,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:01:43.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 26 00:01:47.172: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:01:47.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9116" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":78,"skipped":1136,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:01:47.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1609 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jun 26 00:01:47.345: INFO: Found 0 stateful pods, waiting for 3 Jun 26 00:01:57.349: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 26 00:01:57.349: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 00:01:57.349: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 26 00:02:07.350: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 26 00:02:07.350: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 00:02:07.350: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jun 26 00:02:07.377: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 26 00:02:17.436: INFO: Updating stateful set ss2 Jun 26 00:02:17.471: INFO: Waiting for Pod statefulset-1609/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jun 26 00:02:28.083: INFO: Found 2 stateful pods, waiting for 3 Jun 26 00:02:38.089: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 26 00:02:38.090: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 00:02:38.090: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently 
Running - Ready=true STEP: Performing a phased rolling update Jun 26 00:02:38.113: INFO: Updating stateful set ss2 Jun 26 00:02:38.162: INFO: Waiting for Pod statefulset-1609/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 26 00:02:48.189: INFO: Updating stateful set ss2 Jun 26 00:02:48.272: INFO: Waiting for StatefulSet statefulset-1609/ss2 to complete update Jun 26 00:02:48.272: INFO: Waiting for Pod statefulset-1609/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 26 00:02:58.287: INFO: Waiting for StatefulSet statefulset-1609/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 26 00:03:08.282: INFO: Deleting all statefulset in ns statefulset-1609 Jun 26 00:03:08.284: INFO: Scaling statefulset ss2 to 0 Jun 26 00:03:28.304: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 00:03:28.308: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:03:28.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1609" for this suite. • [SLOW TEST:101.141 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":294,"completed":79,"skipped":1212,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:03:28.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:03:28.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e03452aa-0530-420e-a52f-4fa7939f89e9" in namespace "downward-api-1143" to 
be "Succeeded or Failed" Jun 26 00:03:28.519: INFO: Pod "downwardapi-volume-e03452aa-0530-420e-a52f-4fa7939f89e9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.03248ms Jun 26 00:03:30.650: INFO: Pod "downwardapi-volume-e03452aa-0530-420e-a52f-4fa7939f89e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150186452s Jun 26 00:03:32.654: INFO: Pod "downwardapi-volume-e03452aa-0530-420e-a52f-4fa7939f89e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154054433s STEP: Saw pod success Jun 26 00:03:32.654: INFO: Pod "downwardapi-volume-e03452aa-0530-420e-a52f-4fa7939f89e9" satisfied condition "Succeeded or Failed" Jun 26 00:03:32.658: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e03452aa-0530-420e-a52f-4fa7939f89e9 container client-container: STEP: delete the pod Jun 26 00:03:32.867: INFO: Waiting for pod downwardapi-volume-e03452aa-0530-420e-a52f-4fa7939f89e9 to disappear Jun 26 00:03:32.880: INFO: Pod downwardapi-volume-e03452aa-0530-420e-a52f-4fa7939f89e9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:03:32.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1143" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":80,"skipped":1215,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:03:32.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:03:39.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8190" for this suite. • [SLOW TEST:7.104 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":294,"completed":81,"skipped":1216,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:03:39.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:03:40.073: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-46e60831-ce85-4bfd-9ab1-8328b4b107f2" in namespace "security-context-test-4997" to be "Succeeded or Failed" Jun 26 00:03:40.078: INFO: Pod "busybox-readonly-false-46e60831-ce85-4bfd-9ab1-8328b4b107f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.934734ms Jun 26 00:03:42.082: INFO: Pod "busybox-readonly-false-46e60831-ce85-4bfd-9ab1-8328b4b107f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009295334s Jun 26 00:03:44.087: INFO: Pod "busybox-readonly-false-46e60831-ce85-4bfd-9ab1-8328b4b107f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01419283s Jun 26 00:03:44.087: INFO: Pod "busybox-readonly-false-46e60831-ce85-4bfd-9ab1-8328b4b107f2" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:03:44.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4997" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":294,"completed":82,"skipped":1224,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:03:44.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:03:44.188: INFO: Waiting up to 5m0s for pod "downwardapi-volume-303d4577-f580-4b7d-a2c3-5bebd50e1012" in namespace "downward-api-7982" to be "Succeeded or Failed" Jun 26 00:03:44.203: INFO: Pod "downwardapi-volume-303d4577-f580-4b7d-a2c3-5bebd50e1012": Phase="Pending", Reason="", readiness=false. Elapsed: 14.934719ms Jun 26 00:03:46.217: INFO: Pod "downwardapi-volume-303d4577-f580-4b7d-a2c3-5bebd50e1012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029613745s Jun 26 00:03:48.222: INFO: Pod "downwardapi-volume-303d4577-f580-4b7d-a2c3-5bebd50e1012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034429901s STEP: Saw pod success Jun 26 00:03:48.222: INFO: Pod "downwardapi-volume-303d4577-f580-4b7d-a2c3-5bebd50e1012" satisfied condition "Succeeded or Failed" Jun 26 00:03:48.226: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-303d4577-f580-4b7d-a2c3-5bebd50e1012 container client-container: STEP: delete the pod Jun 26 00:03:48.253: INFO: Waiting for pod downwardapi-volume-303d4577-f580-4b7d-a2c3-5bebd50e1012 to disappear Jun 26 00:03:48.311: INFO: Pod downwardapi-volume-303d4577-f580-4b7d-a2c3-5bebd50e1012 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:03:48.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7982" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":83,"skipped":1227,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:03:48.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 26 00:03:49.162: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 26 00:03:51.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728726629, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728726629, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728726629, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728726629, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:03:54.587: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:03:54.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:03:55.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-712" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.701 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":294,"completed":84,"skipped":1237,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:03:56.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-7953aef1-f86a-4c4b-a6eb-aab684757525 in namespace container-probe-1697 Jun 26 00:04:00.109: INFO: Started pod busybox-7953aef1-f86a-4c4b-a6eb-aab684757525 in namespace container-probe-1697 STEP: checking the pod's current state and verifying that restartCount is present Jun 26 00:04:00.112: INFO: Initial restart count of pod busybox-7953aef1-f86a-4c4b-a6eb-aab684757525 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:08:00.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1697" for this suite. 
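------------------------------
This is the stable counterpart of the /healthz case earlier: the exec probe "cat /tmp/health" keeps succeeding because the file is created at startup and never removed, so four minutes of observation end with restartCount still 0. A minimal Go sketch, with assumed image and timings (Handler is named ProbeHandler in client-go v0.23+):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "busybox",
                Image: "docker.io/library/busybox:1.29", // assumed image
                // The file exists for the container's whole lifetime, so the
                // probe never fails and the kubelet never restarts it.
                Command: []string{"sh", "-c", "touch /tmp/health; sleep 600"},
                LivenessProbe: &corev1.Probe{
                    Handler: corev1.Handler{ // ProbeHandler in newer client-go
                        Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                    },
                    InitialDelaySeconds: 15, // illustrative timings
                    FailureThreshold:    1,
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
------------------------------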
• [SLOW TEST:244.834 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":294,"completed":85,"skipped":1255,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:08:00.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 26 00:08:05.468: INFO: Successfully updated pod "pod-update-f2673a2c-362e-4d18-9fd4-44c7a5575f04" STEP: verifying the updated pod is in kubernetes Jun 26 00:08:05.492: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:08:05.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5105" for this suite. 
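------------------------------
"Successfully updated pod" is a read-modify-write on the pod object, and since another writer can interleave, the idiomatic client-go pattern wraps it in a conflict-retry loop. A minimal Go sketch with v0.18-era signatures; kubeconfig path, namespace, pod name, and the label being set are placeholders:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/retry"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pods := cs.CoreV1().Pods("default") // placeholder namespace
    err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
        pod, err := pods.Get(context.TODO(), "pod-update", metav1.GetOptions{}) // placeholder name
        if err != nil {
            return err
        }
        if pod.Labels == nil {
            pod.Labels = map[string]string{}
        }
        pod.Labels["time"] = "updated" // placeholder mutation
        _, err = pods.Update(context.TODO(), pod, metav1.UpdateOptions{})
        return err // a Conflict here triggers another Get+mutate round
    })
    if err != nil {
        panic(err)
    }
    fmt.Println("pod update OK")
}
------------------------------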
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":294,"completed":86,"skipped":1286,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:08:05.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Jun 26 00:08:05.547: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix300098336/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:08:05.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-529" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":294,"completed":87,"skipped":1292,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:08:05.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-5mr9 STEP: Creating a pod to test atomic-volume-subpath Jun 26 00:08:05.711: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5mr9" in namespace "subpath-6234" to be "Succeeded or Failed" Jun 26 00:08:05.752: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.564284ms Jun 26 00:08:07.908: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197524599s Jun 26 00:08:09.913: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Running", Reason="", readiness=true. Elapsed: 4.202290174s Jun 26 00:08:11.917: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Running", Reason="", readiness=true. Elapsed: 6.206678805s Jun 26 00:08:13.927: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Running", Reason="", readiness=true. Elapsed: 8.21592835s Jun 26 00:08:15.929: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Running", Reason="", readiness=true. Elapsed: 10.218744032s Jun 26 00:08:17.933: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Running", Reason="", readiness=true. Elapsed: 12.222229596s Jun 26 00:08:19.980: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Running", Reason="", readiness=true. Elapsed: 14.269288818s Jun 26 00:08:22.064: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Running", Reason="", readiness=true. Elapsed: 16.353307201s Jun 26 00:08:24.069: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Running", Reason="", readiness=true. Elapsed: 18.357865383s Jun 26 00:08:26.073: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Running", Reason="", readiness=true. Elapsed: 20.361854009s Jun 26 00:08:28.076: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Running", Reason="", readiness=true. Elapsed: 22.365603462s Jun 26 00:08:30.080: INFO: Pod "pod-subpath-test-configmap-5mr9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.369486865s STEP: Saw pod success Jun 26 00:08:30.080: INFO: Pod "pod-subpath-test-configmap-5mr9" satisfied condition "Succeeded or Failed" Jun 26 00:08:30.084: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-5mr9 container test-container-subpath-configmap-5mr9: STEP: delete the pod Jun 26 00:08:30.145: INFO: Waiting for pod pod-subpath-test-configmap-5mr9 to disappear Jun 26 00:08:30.178: INFO: Pod pod-subpath-test-configmap-5mr9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-5mr9 Jun 26 00:08:30.178: INFO: Deleting pod "pod-subpath-test-configmap-5mr9" in namespace "subpath-6234" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:08:30.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6234" for this suite. 
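------------------------------
The atomic-writer subpath case mounts a single key of a ConfigMap at a file path via subPath, and the long Running phase above is the test container re-reading that file once a second before exiting. A minimal Go sketch, with assumed ConfigMap and key names:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "docker.io/library/busybox:1.29", // assumed image
                Command: []string{"sh", "-c", "for i in $(seq 1 20); do cat /test/sub; sleep 1; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "cm",
                    MountPath: "/test/sub",
                    // Mount only this path within the volume, not the whole volume.
                    SubPath: "configmap-key", // assumed key name
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "cm",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // assumed
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
------------------------------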
• [SLOW TEST:24.597 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":294,"completed":88,"skipped":1324,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:08:30.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 26 00:08:38.139: INFO: 8 pods remaining Jun 26 00:08:38.139: INFO: 0 pods has nil DeletionTimestamp Jun 26 00:08:38.139: INFO: Jun 26 00:08:39.973: INFO: 0 pods remaining Jun 26 00:08:39.973: INFO: 0 pods has nil DeletionTimestamp Jun 26 00:08:39.973: INFO: STEP: Gathering metrics W0626 00:08:40.500747 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 26 00:08:40.500: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:08:40.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7477" for this suite. 
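------------------------------
"Keep the rc around until all its pods are deleted" is foreground cascading deletion: the ReplicationController gets a deletionTimestamp plus the foregroundDeletion finalizer, the garbage collector removes its pods, and only then is the RC itself removed, which matches the "8 pods remaining ... 0 pods remaining" countdown above. A minimal Go sketch of issuing such a delete with v0.18-era signatures; kubeconfig path, namespace, and name are placeholders:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Foreground: the owner is deleted only after its dependents are gone.
    policy := metav1.DeletePropagationForeground
    err = cs.CoreV1().ReplicationControllers("default").Delete( // placeholders
        context.TODO(), "simpletest.rc",
        metav1.DeleteOptions{PropagationPolicy: &policy},
    )
    if err != nil {
        panic(err)
    }
    fmt.Println("delete issued; rc remains until its pods are gone")
}
------------------------------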
• [SLOW TEST:10.277 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":294,"completed":89,"skipped":1343,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:08:40.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:08:58.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3548" for this suite. 
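The Job test above depends on restartPolicy: OnFailure, where the kubelet restarts the failing container in place ("locally") rather than the Job controller creating replacement pods. A hedged sketch of a Job with that shape; the image, failure command, and counts are illustrative.

package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// flakyJob returns a Job whose pods fail roughly half the time; failed
// containers are restarted in place because of RestartPolicy OnFailure,
// so the Job still reaches its completion count.
func flakyJob(ns string) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "flaky-job", Namespace: ns},
		Spec: batchv1.JobSpec{
			Parallelism: int32Ptr(2),
			Completions: int32Ptr(4),
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// kubelet restarts failed containers locally, in the same pod
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox",
						// exits 0 or 1 depending on the current second (illustrative)
						Command: []string{"sh", "-c", "test $(($(date +%s) % 2)) -eq 0"},
					}},
				},
			},
		},
	}
}

Because failures are retried locally, the Job eventually satisfies .spec.completions, which is what "Ensuring job reaches completions" waits for.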
• [SLOW TEST:18.430 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":294,"completed":90,"skipped":1352,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:08:58.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:08:59.993: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 00:09:02.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728726939, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728726939, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728726940, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728726939, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:09:05.042: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:09:05.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7777-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to 
set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:09:06.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5556" for this suite. STEP: Destroying namespace "webhook-5556-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.492 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":294,"completed":91,"skipped":1353,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:09:06.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-fa2adfe4-c303-4010-a5c1-174874ef9bc6 STEP: Creating a pod to test consume configMaps Jun 26 00:09:06.617: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf24e9be-6498-4841-8e76-2b37c9cc8378" in namespace "configmap-8371" to be "Succeeded or Failed" Jun 26 00:09:06.630: INFO: Pod "pod-configmaps-bf24e9be-6498-4841-8e76-2b37c9cc8378": Phase="Pending", Reason="", readiness=false. Elapsed: 12.511999ms Jun 26 00:09:08.634: INFO: Pod "pod-configmaps-bf24e9be-6498-4841-8e76-2b37c9cc8378": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016788239s Jun 26 00:09:10.639: INFO: Pod "pod-configmaps-bf24e9be-6498-4841-8e76-2b37c9cc8378": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021664759s STEP: Saw pod success Jun 26 00:09:10.639: INFO: Pod "pod-configmaps-bf24e9be-6498-4841-8e76-2b37c9cc8378" satisfied condition "Succeeded or Failed" Jun 26 00:09:10.642: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-bf24e9be-6498-4841-8e76-2b37c9cc8378 container configmap-volume-test: STEP: delete the pod Jun 26 00:09:10.676: INFO: Waiting for pod pod-configmaps-bf24e9be-6498-4841-8e76-2b37c9cc8378 to disappear Jun 26 00:09:10.680: INFO: Pod pod-configmaps-bf24e9be-6498-4841-8e76-2b37c9cc8378 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:09:10.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8371" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":294,"completed":92,"skipped":1362,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:09:10.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4104 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4104 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4104 Jun 26 00:09:11.232: INFO: Found 0 stateful pods, waiting for 1 Jun 26 00:09:21.236: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 26 00:09:21.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4104 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 00:09:24.154: INFO: stderr: "I0626 00:09:23.995865 1377 log.go:172] (0xc000a12000) (0xc00067c820) Create stream\nI0626 00:09:23.995916 1377 log.go:172] (0xc000a12000) (0xc00067c820) Stream added, broadcasting: 1\nI0626 00:09:23.998196 1377 log.go:172] (0xc000a12000) Reply frame received for 1\nI0626 00:09:23.998226 1377 log.go:172] 
(0xc000a12000) (0xc000672000) Create stream\nI0626 00:09:23.998234 1377 log.go:172] (0xc000a12000) (0xc000672000) Stream added, broadcasting: 3\nI0626 00:09:23.999245 1377 log.go:172] (0xc000a12000) Reply frame received for 3\nI0626 00:09:23.999290 1377 log.go:172] (0xc000a12000) (0xc00063e000) Create stream\nI0626 00:09:23.999301 1377 log.go:172] (0xc000a12000) (0xc00063e000) Stream added, broadcasting: 5\nI0626 00:09:24.000443 1377 log.go:172] (0xc000a12000) Reply frame received for 5\nI0626 00:09:24.090056 1377 log.go:172] (0xc000a12000) Data frame received for 5\nI0626 00:09:24.090094 1377 log.go:172] (0xc00063e000) (5) Data frame handling\nI0626 00:09:24.090120 1377 log.go:172] (0xc00063e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 00:09:24.143583 1377 log.go:172] (0xc000a12000) Data frame received for 3\nI0626 00:09:24.143624 1377 log.go:172] (0xc000672000) (3) Data frame handling\nI0626 00:09:24.143651 1377 log.go:172] (0xc000672000) (3) Data frame sent\nI0626 00:09:24.144047 1377 log.go:172] (0xc000a12000) Data frame received for 3\nI0626 00:09:24.144082 1377 log.go:172] (0xc000672000) (3) Data frame handling\nI0626 00:09:24.144108 1377 log.go:172] (0xc000a12000) Data frame received for 5\nI0626 00:09:24.144123 1377 log.go:172] (0xc00063e000) (5) Data frame handling\nI0626 00:09:24.146374 1377 log.go:172] (0xc000a12000) Data frame received for 1\nI0626 00:09:24.146431 1377 log.go:172] (0xc00067c820) (1) Data frame handling\nI0626 00:09:24.146460 1377 log.go:172] (0xc00067c820) (1) Data frame sent\nI0626 00:09:24.146499 1377 log.go:172] (0xc000a12000) (0xc00067c820) Stream removed, broadcasting: 1\nI0626 00:09:24.146534 1377 log.go:172] (0xc000a12000) Go away received\nI0626 00:09:24.147111 1377 log.go:172] (0xc000a12000) (0xc00067c820) Stream removed, broadcasting: 1\nI0626 00:09:24.147144 1377 log.go:172] (0xc000a12000) (0xc000672000) Stream removed, broadcasting: 3\nI0626 00:09:24.147166 1377 log.go:172] (0xc000a12000) (0xc00063e000) Stream removed, broadcasting: 5\n" Jun 26 00:09:24.154: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 00:09:24.154: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 00:09:24.157: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 26 00:09:34.163: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 26 00:09:34.163: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 00:09:34.227: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999425s Jun 26 00:09:35.231: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.947615882s Jun 26 00:09:36.236: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.943274717s Jun 26 00:09:37.240: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.938460938s Jun 26 00:09:38.244: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.934590302s Jun 26 00:09:39.248: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.930108275s Jun 26 00:09:40.253: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.926604811s Jun 26 00:09:41.257: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.921735881s Jun 26 00:09:42.262: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.917731458s Jun 26 00:09:43.267: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 912.722073ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4104 Jun 26 00:09:44.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4104 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 00:09:44.716: INFO: stderr: "I0626 00:09:44.613339 1409 log.go:172] (0xc000b47760) (0xc000b2c320) Create stream\nI0626 00:09:44.613393 1409 log.go:172] (0xc000b47760) (0xc000b2c320) Stream added, broadcasting: 1\nI0626 00:09:44.617815 1409 log.go:172] (0xc000b47760) Reply frame received for 1\nI0626 00:09:44.617865 1409 log.go:172] (0xc000b47760) (0xc00070c320) Create stream\nI0626 00:09:44.617878 1409 log.go:172] (0xc000b47760) (0xc00070c320) Stream added, broadcasting: 3\nI0626 00:09:44.618896 1409 log.go:172] (0xc000b47760) Reply frame received for 3\nI0626 00:09:44.618939 1409 log.go:172] (0xc000b47760) (0xc000508a00) Create stream\nI0626 00:09:44.618950 1409 log.go:172] (0xc000b47760) (0xc000508a00) Stream added, broadcasting: 5\nI0626 00:09:44.619812 1409 log.go:172] (0xc000b47760) Reply frame received for 5\nI0626 00:09:44.706142 1409 log.go:172] (0xc000b47760) Data frame received for 5\nI0626 00:09:44.706175 1409 log.go:172] (0xc000508a00) (5) Data frame handling\nI0626 00:09:44.706191 1409 log.go:172] (0xc000508a00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 00:09:44.706224 1409 log.go:172] (0xc000b47760) Data frame received for 3\nI0626 00:09:44.706233 1409 log.go:172] (0xc00070c320) (3) Data frame handling\nI0626 00:09:44.706243 1409 log.go:172] (0xc00070c320) (3) Data frame sent\nI0626 00:09:44.706251 1409 log.go:172] (0xc000b47760) Data frame received for 3\nI0626 00:09:44.706259 1409 log.go:172] (0xc00070c320) (3) Data frame handling\nI0626 00:09:44.706360 1409 log.go:172] (0xc000b47760) Data frame received for 5\nI0626 00:09:44.706385 1409 log.go:172] (0xc000508a00) (5) Data frame handling\nI0626 00:09:44.707877 1409 log.go:172] (0xc000b47760) Data frame received for 1\nI0626 00:09:44.707900 1409 log.go:172] (0xc000b2c320) (1) Data frame handling\nI0626 00:09:44.707923 1409 log.go:172] (0xc000b2c320) (1) Data frame sent\nI0626 00:09:44.708094 1409 log.go:172] (0xc000b47760) (0xc000b2c320) Stream removed, broadcasting: 1\nI0626 00:09:44.708370 1409 log.go:172] (0xc000b47760) Go away received\nI0626 00:09:44.708413 1409 log.go:172] (0xc000b47760) (0xc000b2c320) Stream removed, broadcasting: 1\nI0626 00:09:44.708435 1409 log.go:172] (0xc000b47760) (0xc00070c320) Stream removed, broadcasting: 3\nI0626 00:09:44.708441 1409 log.go:172] (0xc000b47760) (0xc000508a00) Stream removed, broadcasting: 5\n" Jun 26 00:09:44.716: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 00:09:44.716: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 00:09:44.719: INFO: Found 1 stateful pods, waiting for 3 Jun 26 00:09:54.724: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 26 00:09:54.724: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 00:09:54.724: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with 
unhealthy stateful pod Jun 26 00:09:54.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4104 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 00:09:54.932: INFO: stderr: "I0626 00:09:54.869711 1430 log.go:172] (0xc000baf290) (0xc00059d180) Create stream\nI0626 00:09:54.869768 1430 log.go:172] (0xc000baf290) (0xc00059d180) Stream added, broadcasting: 1\nI0626 00:09:54.873507 1430 log.go:172] (0xc000baf290) Reply frame received for 1\nI0626 00:09:54.873554 1430 log.go:172] (0xc000baf290) (0xc0003e1cc0) Create stream\nI0626 00:09:54.873570 1430 log.go:172] (0xc000baf290) (0xc0003e1cc0) Stream added, broadcasting: 3\nI0626 00:09:54.874956 1430 log.go:172] (0xc000baf290) Reply frame received for 3\nI0626 00:09:54.874981 1430 log.go:172] (0xc000baf290) (0xc00051a500) Create stream\nI0626 00:09:54.874998 1430 log.go:172] (0xc000baf290) (0xc00051a500) Stream added, broadcasting: 5\nI0626 00:09:54.876449 1430 log.go:172] (0xc000baf290) Reply frame received for 5\nI0626 00:09:54.925827 1430 log.go:172] (0xc000baf290) Data frame received for 5\nI0626 00:09:54.925867 1430 log.go:172] (0xc00051a500) (5) Data frame handling\nI0626 00:09:54.925882 1430 log.go:172] (0xc00051a500) (5) Data frame sent\nI0626 00:09:54.925891 1430 log.go:172] (0xc000baf290) Data frame received for 5\nI0626 00:09:54.925898 1430 log.go:172] (0xc00051a500) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 00:09:54.925919 1430 log.go:172] (0xc000baf290) Data frame received for 3\nI0626 00:09:54.925928 1430 log.go:172] (0xc0003e1cc0) (3) Data frame handling\nI0626 00:09:54.925940 1430 log.go:172] (0xc0003e1cc0) (3) Data frame sent\nI0626 00:09:54.925948 1430 log.go:172] (0xc000baf290) Data frame received for 3\nI0626 00:09:54.925956 1430 log.go:172] (0xc0003e1cc0) (3) Data frame handling\nI0626 00:09:54.927384 1430 log.go:172] (0xc000baf290) Data frame received for 1\nI0626 00:09:54.927420 1430 log.go:172] (0xc00059d180) (1) Data frame handling\nI0626 00:09:54.927444 1430 log.go:172] (0xc00059d180) (1) Data frame sent\nI0626 00:09:54.927471 1430 log.go:172] (0xc000baf290) (0xc00059d180) Stream removed, broadcasting: 1\nI0626 00:09:54.927501 1430 log.go:172] (0xc000baf290) Go away received\nI0626 00:09:54.927847 1430 log.go:172] (0xc000baf290) (0xc00059d180) Stream removed, broadcasting: 1\nI0626 00:09:54.927867 1430 log.go:172] (0xc000baf290) (0xc0003e1cc0) Stream removed, broadcasting: 3\nI0626 00:09:54.927876 1430 log.go:172] (0xc000baf290) (0xc00051a500) Stream removed, broadcasting: 5\n" Jun 26 00:09:54.932: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 00:09:54.932: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 00:09:54.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4104 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 00:09:55.195: INFO: stderr: "I0626 00:09:55.060001 1450 log.go:172] (0xc000afb130) (0xc000afe140) Create stream\nI0626 00:09:55.060052 1450 log.go:172] (0xc000afb130) (0xc000afe140) Stream added, broadcasting: 1\nI0626 00:09:55.066682 1450 log.go:172] (0xc000afb130) Reply frame received for 1\nI0626 00:09:55.066725 1450 log.go:172] (0xc000afb130) (0xc0006e5c20) Create stream\nI0626 
00:09:55.066735 1450 log.go:172] (0xc000afb130) (0xc0006e5c20) Stream added, broadcasting: 3\nI0626 00:09:55.067470 1450 log.go:172] (0xc000afb130) Reply frame received for 3\nI0626 00:09:55.067516 1450 log.go:172] (0xc000afb130) (0xc0006f3360) Create stream\nI0626 00:09:55.067529 1450 log.go:172] (0xc000afb130) (0xc0006f3360) Stream added, broadcasting: 5\nI0626 00:09:55.068174 1450 log.go:172] (0xc000afb130) Reply frame received for 5\nI0626 00:09:55.151282 1450 log.go:172] (0xc000afb130) Data frame received for 5\nI0626 00:09:55.151308 1450 log.go:172] (0xc0006f3360) (5) Data frame handling\nI0626 00:09:55.151324 1450 log.go:172] (0xc0006f3360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 00:09:55.187227 1450 log.go:172] (0xc000afb130) Data frame received for 3\nI0626 00:09:55.187386 1450 log.go:172] (0xc0006e5c20) (3) Data frame handling\nI0626 00:09:55.187425 1450 log.go:172] (0xc0006e5c20) (3) Data frame sent\nI0626 00:09:55.187755 1450 log.go:172] (0xc000afb130) Data frame received for 3\nI0626 00:09:55.187772 1450 log.go:172] (0xc0006e5c20) (3) Data frame handling\nI0626 00:09:55.187802 1450 log.go:172] (0xc000afb130) Data frame received for 5\nI0626 00:09:55.187816 1450 log.go:172] (0xc0006f3360) (5) Data frame handling\nI0626 00:09:55.189633 1450 log.go:172] (0xc000afb130) Data frame received for 1\nI0626 00:09:55.189656 1450 log.go:172] (0xc000afe140) (1) Data frame handling\nI0626 00:09:55.189672 1450 log.go:172] (0xc000afe140) (1) Data frame sent\nI0626 00:09:55.189844 1450 log.go:172] (0xc000afb130) (0xc000afe140) Stream removed, broadcasting: 1\nI0626 00:09:55.189889 1450 log.go:172] (0xc000afb130) Go away received\nI0626 00:09:55.190191 1450 log.go:172] (0xc000afb130) (0xc000afe140) Stream removed, broadcasting: 1\nI0626 00:09:55.190223 1450 log.go:172] (0xc000afb130) (0xc0006e5c20) Stream removed, broadcasting: 3\nI0626 00:09:55.190234 1450 log.go:172] (0xc000afb130) (0xc0006f3360) Stream removed, broadcasting: 5\n" Jun 26 00:09:55.196: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 00:09:55.196: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 00:09:55.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4104 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 00:09:55.460: INFO: stderr: "I0626 00:09:55.337525 1471 log.go:172] (0xc000a031e0) (0xc000b54500) Create stream\nI0626 00:09:55.337580 1471 log.go:172] (0xc000a031e0) (0xc000b54500) Stream added, broadcasting: 1\nI0626 00:09:55.342630 1471 log.go:172] (0xc000a031e0) Reply frame received for 1\nI0626 00:09:55.342674 1471 log.go:172] (0xc000a031e0) (0xc0003ee460) Create stream\nI0626 00:09:55.342686 1471 log.go:172] (0xc000a031e0) (0xc0003ee460) Stream added, broadcasting: 3\nI0626 00:09:55.343591 1471 log.go:172] (0xc000a031e0) Reply frame received for 3\nI0626 00:09:55.343638 1471 log.go:172] (0xc000a031e0) (0xc00067e5a0) Create stream\nI0626 00:09:55.343650 1471 log.go:172] (0xc000a031e0) (0xc00067e5a0) Stream added, broadcasting: 5\nI0626 00:09:55.344527 1471 log.go:172] (0xc000a031e0) Reply frame received for 5\nI0626 00:09:55.412878 1471 log.go:172] (0xc000a031e0) Data frame received for 5\nI0626 00:09:55.412903 1471 log.go:172] (0xc00067e5a0) (5) Data frame handling\nI0626 00:09:55.412917 1471 log.go:172] (0xc00067e5a0) 
(5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 00:09:55.451857 1471 log.go:172] (0xc000a031e0) Data frame received for 3\nI0626 00:09:55.451898 1471 log.go:172] (0xc000a031e0) Data frame received for 5\nI0626 00:09:55.451916 1471 log.go:172] (0xc00067e5a0) (5) Data frame handling\nI0626 00:09:55.451958 1471 log.go:172] (0xc0003ee460) (3) Data frame handling\nI0626 00:09:55.452010 1471 log.go:172] (0xc0003ee460) (3) Data frame sent\nI0626 00:09:55.452032 1471 log.go:172] (0xc000a031e0) Data frame received for 3\nI0626 00:09:55.452051 1471 log.go:172] (0xc0003ee460) (3) Data frame handling\nI0626 00:09:55.454332 1471 log.go:172] (0xc000a031e0) Data frame received for 1\nI0626 00:09:55.454384 1471 log.go:172] (0xc000b54500) (1) Data frame handling\nI0626 00:09:55.454421 1471 log.go:172] (0xc000b54500) (1) Data frame sent\nI0626 00:09:55.454450 1471 log.go:172] (0xc000a031e0) (0xc000b54500) Stream removed, broadcasting: 1\nI0626 00:09:55.454479 1471 log.go:172] (0xc000a031e0) Go away received\nI0626 00:09:55.454769 1471 log.go:172] (0xc000a031e0) (0xc000b54500) Stream removed, broadcasting: 1\nI0626 00:09:55.454787 1471 log.go:172] (0xc000a031e0) (0xc0003ee460) Stream removed, broadcasting: 3\nI0626 00:09:55.454797 1471 log.go:172] (0xc000a031e0) (0xc00067e5a0) Stream removed, broadcasting: 5\n" Jun 26 00:09:55.460: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 00:09:55.460: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 00:09:55.460: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 00:09:55.465: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 26 00:10:05.496: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 26 00:10:05.496: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 26 00:10:05.496: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 26 00:10:05.537: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999547s Jun 26 00:10:06.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.968270224s Jun 26 00:10:07.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.962893179s Jun 26 00:10:08.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.958624755s Jun 26 00:10:09.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.952973869s Jun 26 00:10:10.564: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.947136893s Jun 26 00:10:11.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.941210755s Jun 26 00:10:12.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.935732816s Jun 26 00:10:13.581: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.929852634s Jun 26 00:10:14.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 924.499705ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4104 Jun 26 00:10:15.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4104 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 00:10:15.827: INFO: stderr: "I0626 00:10:15.740023 1491 log.go:172] (0xc000c49340) 
(0xc00086cc80) Create stream\nI0626 00:10:15.740109 1491 log.go:172] (0xc000c49340) (0xc00086cc80) Stream added, broadcasting: 1\nI0626 00:10:15.744906 1491 log.go:172] (0xc000c49340) Reply frame received for 1\nI0626 00:10:15.744951 1491 log.go:172] (0xc000c49340) (0xc000861860) Create stream\nI0626 00:10:15.744965 1491 log.go:172] (0xc000c49340) (0xc000861860) Stream added, broadcasting: 3\nI0626 00:10:15.746327 1491 log.go:172] (0xc000c49340) Reply frame received for 3\nI0626 00:10:15.746385 1491 log.go:172] (0xc000c49340) (0xc00066aa00) Create stream\nI0626 00:10:15.746401 1491 log.go:172] (0xc000c49340) (0xc00066aa00) Stream added, broadcasting: 5\nI0626 00:10:15.747339 1491 log.go:172] (0xc000c49340) Reply frame received for 5\nI0626 00:10:15.820103 1491 log.go:172] (0xc000c49340) Data frame received for 3\nI0626 00:10:15.820149 1491 log.go:172] (0xc000861860) (3) Data frame handling\nI0626 00:10:15.820172 1491 log.go:172] (0xc000861860) (3) Data frame sent\nI0626 00:10:15.820185 1491 log.go:172] (0xc000c49340) Data frame received for 3\nI0626 00:10:15.820202 1491 log.go:172] (0xc000861860) (3) Data frame handling\nI0626 00:10:15.820315 1491 log.go:172] (0xc000c49340) Data frame received for 5\nI0626 00:10:15.820340 1491 log.go:172] (0xc00066aa00) (5) Data frame handling\nI0626 00:10:15.820357 1491 log.go:172] (0xc00066aa00) (5) Data frame sent\nI0626 00:10:15.820375 1491 log.go:172] (0xc000c49340) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 00:10:15.820392 1491 log.go:172] (0xc00066aa00) (5) Data frame handling\nI0626 00:10:15.822034 1491 log.go:172] (0xc000c49340) Data frame received for 1\nI0626 00:10:15.822049 1491 log.go:172] (0xc00086cc80) (1) Data frame handling\nI0626 00:10:15.822056 1491 log.go:172] (0xc00086cc80) (1) Data frame sent\nI0626 00:10:15.822217 1491 log.go:172] (0xc000c49340) (0xc00086cc80) Stream removed, broadcasting: 1\nI0626 00:10:15.822255 1491 log.go:172] (0xc000c49340) Go away received\nI0626 00:10:15.822692 1491 log.go:172] (0xc000c49340) (0xc00086cc80) Stream removed, broadcasting: 1\nI0626 00:10:15.822716 1491 log.go:172] (0xc000c49340) (0xc000861860) Stream removed, broadcasting: 3\nI0626 00:10:15.822729 1491 log.go:172] (0xc000c49340) (0xc00066aa00) Stream removed, broadcasting: 5\n" Jun 26 00:10:15.828: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 00:10:15.828: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 00:10:15.828: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4104 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 00:10:16.070: INFO: stderr: "I0626 00:10:15.998796 1512 log.go:172] (0xc000016000) (0xc0002aa1e0) Create stream\nI0626 00:10:15.998868 1512 log.go:172] (0xc000016000) (0xc0002aa1e0) Stream added, broadcasting: 1\nI0626 00:10:16.001049 1512 log.go:172] (0xc000016000) Reply frame received for 1\nI0626 00:10:16.001108 1512 log.go:172] (0xc000016000) (0xc0002ab040) Create stream\nI0626 00:10:16.001310 1512 log.go:172] (0xc000016000) (0xc0002ab040) Stream added, broadcasting: 3\nI0626 00:10:16.002188 1512 log.go:172] (0xc000016000) Reply frame received for 3\nI0626 00:10:16.002248 1512 log.go:172] (0xc000016000) (0xc000139d60) Create stream\nI0626 00:10:16.002273 1512 log.go:172] (0xc000016000) (0xc000139d60) Stream added, broadcasting: 
5\nI0626 00:10:16.003306 1512 log.go:172] (0xc000016000) Reply frame received for 5\nI0626 00:10:16.060952 1512 log.go:172] (0xc000016000) Data frame received for 3\nI0626 00:10:16.060999 1512 log.go:172] (0xc0002ab040) (3) Data frame handling\nI0626 00:10:16.061015 1512 log.go:172] (0xc0002ab040) (3) Data frame sent\nI0626 00:10:16.061038 1512 log.go:172] (0xc000016000) Data frame received for 3\nI0626 00:10:16.061061 1512 log.go:172] (0xc0002ab040) (3) Data frame handling\nI0626 00:10:16.061103 1512 log.go:172] (0xc000016000) Data frame received for 5\nI0626 00:10:16.061333 1512 log.go:172] (0xc000139d60) (5) Data frame handling\nI0626 00:10:16.061369 1512 log.go:172] (0xc000139d60) (5) Data frame sent\nI0626 00:10:16.061400 1512 log.go:172] (0xc000016000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 00:10:16.061425 1512 log.go:172] (0xc000139d60) (5) Data frame handling\nI0626 00:10:16.062984 1512 log.go:172] (0xc000016000) Data frame received for 1\nI0626 00:10:16.063083 1512 log.go:172] (0xc0002aa1e0) (1) Data frame handling\nI0626 00:10:16.063120 1512 log.go:172] (0xc0002aa1e0) (1) Data frame sent\nI0626 00:10:16.063147 1512 log.go:172] (0xc000016000) (0xc0002aa1e0) Stream removed, broadcasting: 1\nI0626 00:10:16.063238 1512 log.go:172] (0xc000016000) Go away received\nI0626 00:10:16.063609 1512 log.go:172] (0xc000016000) (0xc0002aa1e0) Stream removed, broadcasting: 1\nI0626 00:10:16.063627 1512 log.go:172] (0xc000016000) (0xc0002ab040) Stream removed, broadcasting: 3\nI0626 00:10:16.063638 1512 log.go:172] (0xc000016000) (0xc000139d60) Stream removed, broadcasting: 5\n" Jun 26 00:10:16.070: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 00:10:16.070: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 00:10:16.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4104 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 00:10:16.278: INFO: stderr: "I0626 00:10:16.202234 1534 log.go:172] (0xc000a2f080) (0xc000566000) Create stream\nI0626 00:10:16.202301 1534 log.go:172] (0xc000a2f080) (0xc000566000) Stream added, broadcasting: 1\nI0626 00:10:16.208706 1534 log.go:172] (0xc000a2f080) Reply frame received for 1\nI0626 00:10:16.208756 1534 log.go:172] (0xc000a2f080) (0xc000484140) Create stream\nI0626 00:10:16.208770 1534 log.go:172] (0xc000a2f080) (0xc000484140) Stream added, broadcasting: 3\nI0626 00:10:16.209890 1534 log.go:172] (0xc000a2f080) Reply frame received for 3\nI0626 00:10:16.209936 1534 log.go:172] (0xc000a2f080) (0xc000446960) Create stream\nI0626 00:10:16.209949 1534 log.go:172] (0xc000a2f080) (0xc000446960) Stream added, broadcasting: 5\nI0626 00:10:16.210823 1534 log.go:172] (0xc000a2f080) Reply frame received for 5\nI0626 00:10:16.266337 1534 log.go:172] (0xc000a2f080) Data frame received for 5\nI0626 00:10:16.266384 1534 log.go:172] (0xc000446960) (5) Data frame handling\nI0626 00:10:16.266409 1534 log.go:172] (0xc000446960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 00:10:16.266538 1534 log.go:172] (0xc000a2f080) Data frame received for 3\nI0626 00:10:16.266571 1534 log.go:172] (0xc000484140) (3) Data frame handling\nI0626 00:10:16.266594 1534 log.go:172] (0xc000484140) (3) Data frame sent\nI0626 00:10:16.266615 1534 log.go:172] 
(0xc000a2f080) Data frame received for 3\nI0626 00:10:16.266639 1534 log.go:172] (0xc000484140) (3) Data frame handling\nI0626 00:10:16.266770 1534 log.go:172] (0xc000a2f080) Data frame received for 5\nI0626 00:10:16.266806 1534 log.go:172] (0xc000446960) (5) Data frame handling\nI0626 00:10:16.268325 1534 log.go:172] (0xc000a2f080) Data frame received for 1\nI0626 00:10:16.268367 1534 log.go:172] (0xc000566000) (1) Data frame handling\nI0626 00:10:16.268405 1534 log.go:172] (0xc000566000) (1) Data frame sent\nI0626 00:10:16.268435 1534 log.go:172] (0xc000a2f080) (0xc000566000) Stream removed, broadcasting: 1\nI0626 00:10:16.268942 1534 log.go:172] (0xc000a2f080) (0xc000566000) Stream removed, broadcasting: 1\nI0626 00:10:16.268982 1534 log.go:172] (0xc000a2f080) (0xc000484140) Stream removed, broadcasting: 3\nI0626 00:10:16.269481 1534 log.go:172] (0xc000a2f080) (0xc000446960) Stream removed, broadcasting: 5\n" Jun 26 00:10:16.278: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 00:10:16.278: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 00:10:16.278: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 26 00:10:36.292: INFO: Deleting all statefulset in ns statefulset-4104 Jun 26 00:10:36.295: INFO: Scaling statefulset ss to 0 Jun 26 00:10:36.304: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 00:10:36.306: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:10:36.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4104" for this suite. 
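What the long exchange above demonstrates: with the default OrderedReady pod management policy, a StatefulSet creates pods strictly in ordinal order (ss-0, ss-1, ss-2), deletes them in reverse, and refuses to continue scaling while any pod is unready; the test toggles readiness by moving the probed index.html out of and back into the htdocs directory via kubectl exec. A rough Go sketch of a set with that shape follows; the labels match the baz=blah,foo=bar selector in the log, while the image and probe details are assumptions.

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// orderedStatefulSet sketches a set whose scaling halts on unready pods.
func orderedStatefulSet(ns string, replicas int32) *appsv1.StatefulSet {
	labels := map[string]string{"foo": "bar", "baz": "blah"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: ns},
		Spec: appsv1.StatefulSetSpec{
			Replicas:            &replicas,
			ServiceName:         "test", // headless service, as created in the log
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "httpd:2.4", // illustrative; the log shows an Apache htdocs layout
						ReadinessProbe: &corev1.Probe{
							// probing index.html is what moving the file breaks;
							// in API versions after ~1.21 this field is named ProbeHandler
							Handler: corev1.Handler{
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/index.html",
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}
}

While ss-0 is unready, the controller neither creates ss-1 during scale-up nor deletes further pods during scale-down, which is exactly what the repeated "doesn't scale past N" polls verify.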
• [SLOW TEST:85.641 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":294,"completed":93,"skipped":1365,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:10:36.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Jun 26 00:10:36.419: INFO: Waiting up to 5m0s for pod "client-containers-034bd5a7-6705-4626-ba1c-5a26478d19e1" in namespace "containers-3110" to be "Succeeded or Failed" Jun 26 00:10:36.426: INFO: Pod "client-containers-034bd5a7-6705-4626-ba1c-5a26478d19e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233611ms Jun 26 00:10:38.466: INFO: Pod "client-containers-034bd5a7-6705-4626-ba1c-5a26478d19e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046715176s Jun 26 00:10:40.496: INFO: Pod "client-containers-034bd5a7-6705-4626-ba1c-5a26478d19e1": Phase="Running", Reason="", readiness=true. Elapsed: 4.076711525s Jun 26 00:10:42.500: INFO: Pod "client-containers-034bd5a7-6705-4626-ba1c-5a26478d19e1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.080249344s STEP: Saw pod success Jun 26 00:10:42.500: INFO: Pod "client-containers-034bd5a7-6705-4626-ba1c-5a26478d19e1" satisfied condition "Succeeded or Failed" Jun 26 00:10:42.502: INFO: Trying to get logs from node latest-worker pod client-containers-034bd5a7-6705-4626-ba1c-5a26478d19e1 container test-container: STEP: delete the pod Jun 26 00:10:42.570: INFO: Waiting for pod client-containers-034bd5a7-6705-4626-ba1c-5a26478d19e1 to disappear Jun 26 00:10:42.581: INFO: Pod client-containers-034bd5a7-6705-4626-ba1c-5a26478d19e1 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:10:42.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3110" for this suite. • [SLOW TEST:6.260 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":294,"completed":94,"skipped":1441,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:10:42.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:10:42.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97f65f25-548a-4bed-8d44-5e395f178337" in namespace "downward-api-9221" to be "Succeeded or Failed" Jun 26 00:10:42.690: INFO: Pod "downwardapi-volume-97f65f25-548a-4bed-8d44-5e395f178337": Phase="Pending", Reason="", readiness=false. Elapsed: 14.27502ms Jun 26 00:10:44.695: INFO: Pod "downwardapi-volume-97f65f25-548a-4bed-8d44-5e395f178337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018958819s Jun 26 00:10:46.699: INFO: Pod "downwardapi-volume-97f65f25-548a-4bed-8d44-5e395f178337": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02332705s STEP: Saw pod success Jun 26 00:10:46.700: INFO: Pod "downwardapi-volume-97f65f25-548a-4bed-8d44-5e395f178337" satisfied condition "Succeeded or Failed" Jun 26 00:10:46.703: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-97f65f25-548a-4bed-8d44-5e395f178337 container client-container: STEP: delete the pod Jun 26 00:10:46.902: INFO: Waiting for pod downwardapi-volume-97f65f25-548a-4bed-8d44-5e395f178337 to disappear Jun 26 00:10:46.968: INFO: Pod downwardapi-volume-97f65f25-548a-4bed-8d44-5e395f178337 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:10:46.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9221" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":95,"skipped":1452,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:10:46.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-ce48df2a-4864-45e2-8d64-8a4beeaab850 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-ce48df2a-4864-45e2-8d64-8a4beeaab850 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:10:53.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1400" for this suite. 
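The update test above works because configMap-backed volumes, including projected ones, are periodically re-synced by the kubelet, so editing the ConfigMap changes the file contents inside the already-running container; a subPath mount, by contrast, would not receive updates. A minimal sketch of the consuming pod; the names, mount path, and reader command are illustrative.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod exposes a ConfigMap through a projected volume;
// the files under /etc/projected track later updates to the ConfigMap,
// which is what "waiting to observe update in volume" polls for.
func projectedConfigMapPod(ns, cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps", Namespace: ns},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/projected/*; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
}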
• [SLOW TEST:6.299 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":96,"skipped":1452,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:10:53.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-8580 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8580 to expose endpoints map[] Jun 26 00:10:53.466: INFO: Get endpoints failed (66.786166ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 26 00:10:54.470: INFO: successfully validated that service endpoint-test2 in namespace services-8580 exposes endpoints map[] (1.071077136s elapsed) STEP: Creating pod pod1 in namespace services-8580 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8580 to expose endpoints map[pod1:[80]] Jun 26 00:10:58.655: INFO: successfully validated that service endpoint-test2 in namespace services-8580 exposes endpoints map[pod1:[80]] (4.177298896s elapsed) STEP: Creating pod pod2 in namespace services-8580 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8580 to expose endpoints map[pod1:[80] pod2:[80]] Jun 26 00:11:01.902: INFO: successfully validated that service endpoint-test2 in namespace services-8580 exposes endpoints map[pod1:[80] pod2:[80]] (3.242081067s elapsed) STEP: Deleting pod pod1 in namespace services-8580 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8580 to expose endpoints map[pod2:[80]] Jun 26 00:11:02.951: INFO: successfully validated that service endpoint-test2 in namespace services-8580 exposes endpoints map[pod2:[80]] (1.042724379s elapsed) STEP: Deleting pod pod2 in namespace services-8580 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8580 to expose endpoints map[] Jun 26 00:11:02.978: INFO: successfully validated that service endpoint-test2 in namespace services-8580 exposes endpoints map[] (23.226425ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:11:03.073: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8580" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:9.803 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":294,"completed":97,"skipped":1458,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:11:03.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Jun 26 00:11:03.151: INFO: Waiting up to 5m0s for pod "client-containers-6261044f-f5d2-4e5f-98bc-1c086b2e847e" in namespace "containers-4721" to be "Succeeded or Failed" Jun 26 00:11:03.191: INFO: Pod "client-containers-6261044f-f5d2-4e5f-98bc-1c086b2e847e": Phase="Pending", Reason="", readiness=false. Elapsed: 39.854404ms Jun 26 00:11:05.269: INFO: Pod "client-containers-6261044f-f5d2-4e5f-98bc-1c086b2e847e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117634085s Jun 26 00:11:07.272: INFO: Pod "client-containers-6261044f-f5d2-4e5f-98bc-1c086b2e847e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121169179s STEP: Saw pod success Jun 26 00:11:07.272: INFO: Pod "client-containers-6261044f-f5d2-4e5f-98bc-1c086b2e847e" satisfied condition "Succeeded or Failed" Jun 26 00:11:07.274: INFO: Trying to get logs from node latest-worker pod client-containers-6261044f-f5d2-4e5f-98bc-1c086b2e847e container test-container: STEP: delete the pod Jun 26 00:11:07.915: INFO: Waiting for pod client-containers-6261044f-f5d2-4e5f-98bc-1c086b2e847e to disappear Jun 26 00:11:07.987: INFO: Pod client-containers-6261044f-f5d2-4e5f-98bc-1c086b2e847e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:11:07.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4721" for this suite. 
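"Override the image's default arguments (docker cmd)" maps onto the container spec like this: setting only args replaces the image's CMD while the image's ENTRYPOINT is kept, and setting command as well (as in the earlier "command and arguments" test) replaces the ENTRYPOINT too. A short sketch; the image and argument values are illustrative.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// argsOverridePod overrides only the image's CMD via Args.
func argsOverridePod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative
				// Args corresponds to docker CMD; Command (unset here) to ENTRYPOINT.
				Args: []string{"override", "arguments"},
			}},
		},
	}
}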
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":294,"completed":98,"skipped":1465,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:11:07.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:11:08.065: INFO: Waiting up to 5m0s for pod "downwardapi-volume-444b46de-42b2-48a5-8bab-1d6e444800b4" in namespace "projected-7410" to be "Succeeded or Failed" Jun 26 00:11:08.082: INFO: Pod "downwardapi-volume-444b46de-42b2-48a5-8bab-1d6e444800b4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.512434ms Jun 26 00:11:10.120: INFO: Pod "downwardapi-volume-444b46de-42b2-48a5-8bab-1d6e444800b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054914648s Jun 26 00:11:12.168: INFO: Pod "downwardapi-volume-444b46de-42b2-48a5-8bab-1d6e444800b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102475596s STEP: Saw pod success Jun 26 00:11:12.168: INFO: Pod "downwardapi-volume-444b46de-42b2-48a5-8bab-1d6e444800b4" satisfied condition "Succeeded or Failed" Jun 26 00:11:12.171: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-444b46de-42b2-48a5-8bab-1d6e444800b4 container client-container: STEP: delete the pod Jun 26 00:11:12.339: INFO: Waiting for pod downwardapi-volume-444b46de-42b2-48a5-8bab-1d6e444800b4 to disappear Jun 26 00:11:12.362: INFO: Pod downwardapi-volume-444b46de-42b2-48a5-8bab-1d6e444800b4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:11:12.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7410" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":99,"skipped":1473,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:11:12.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2963 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 26 00:11:12.431: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 26 00:11:12.509: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 26 00:11:14.528: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 26 00:11:16.513: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:11:18.513: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:11:20.513: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:11:22.514: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:11:24.514: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:11:26.514: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:11:28.514: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:11:30.514: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:11:32.519: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:11:34.514: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 26 00:11:34.520: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 26 00:11:38.586: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.167:8080/dial?request=hostname&protocol=udp&host=10.244.1.106&port=8081&tries=1'] Namespace:pod-network-test-2963 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:11:38.586: INFO: >>> kubeConfig: /root/.kube/config I0626 00:11:38.648407 8 log.go:172] (0xc0010de580) (0xc002c58b40) Create stream I0626 00:11:38.648464 8 log.go:172] (0xc0010de580) (0xc002c58b40) Stream added, broadcasting: 1 I0626 00:11:38.651420 8 log.go:172] (0xc0010de580) Reply frame received for 1 I0626 00:11:38.651455 8 log.go:172] (0xc0010de580) (0xc002cd9900) Create stream I0626 00:11:38.651469 8 log.go:172] (0xc0010de580) (0xc002cd9900) Stream added, broadcasting: 3 I0626 00:11:38.652454 8 log.go:172] (0xc0010de580) Reply frame received for 3 
I0626 00:11:38.652483 8 log.go:172] (0xc0010de580) (0xc00281cc80) Create stream I0626 00:11:38.652498 8 log.go:172] (0xc0010de580) (0xc00281cc80) Stream added, broadcasting: 5 I0626 00:11:38.653635 8 log.go:172] (0xc0010de580) Reply frame received for 5 I0626 00:11:38.830947 8 log.go:172] (0xc0010de580) Data frame received for 3 I0626 00:11:38.830972 8 log.go:172] (0xc002cd9900) (3) Data frame handling I0626 00:11:38.830980 8 log.go:172] (0xc002cd9900) (3) Data frame sent I0626 00:11:38.831487 8 log.go:172] (0xc0010de580) Data frame received for 5 I0626 00:11:38.831509 8 log.go:172] (0xc00281cc80) (5) Data frame handling I0626 00:11:38.831601 8 log.go:172] (0xc0010de580) Data frame received for 3 I0626 00:11:38.831671 8 log.go:172] (0xc002cd9900) (3) Data frame handling I0626 00:11:38.833867 8 log.go:172] (0xc0010de580) Data frame received for 1 I0626 00:11:38.833941 8 log.go:172] (0xc002c58b40) (1) Data frame handling I0626 00:11:38.833983 8 log.go:172] (0xc002c58b40) (1) Data frame sent I0626 00:11:38.834002 8 log.go:172] (0xc0010de580) (0xc002c58b40) Stream removed, broadcasting: 1 I0626 00:11:38.834020 8 log.go:172] (0xc0010de580) Go away received I0626 00:11:38.834476 8 log.go:172] (0xc0010de580) (0xc002c58b40) Stream removed, broadcasting: 1 I0626 00:11:38.834501 8 log.go:172] (0xc0010de580) (0xc002cd9900) Stream removed, broadcasting: 3 I0626 00:11:38.834513 8 log.go:172] (0xc0010de580) (0xc00281cc80) Stream removed, broadcasting: 5 Jun 26 00:11:38.834: INFO: Waiting for responses: map[] Jun 26 00:11:38.837: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.167:8080/dial?request=hostname&protocol=udp&host=10.244.2.166&port=8081&tries=1'] Namespace:pod-network-test-2963 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:11:38.837: INFO: >>> kubeConfig: /root/.kube/config I0626 00:11:38.860949 8 log.go:172] (0xc00293f970) (0xc00281d0e0) Create stream I0626 00:11:38.860973 8 log.go:172] (0xc00293f970) (0xc00281d0e0) Stream added, broadcasting: 1 I0626 00:11:38.863537 8 log.go:172] (0xc00293f970) Reply frame received for 1 I0626 00:11:38.863584 8 log.go:172] (0xc00293f970) (0xc002b80780) Create stream I0626 00:11:38.863598 8 log.go:172] (0xc00293f970) (0xc002b80780) Stream added, broadcasting: 3 I0626 00:11:38.864678 8 log.go:172] (0xc00293f970) Reply frame received for 3 I0626 00:11:38.864710 8 log.go:172] (0xc00293f970) (0xc002cd99a0) Create stream I0626 00:11:38.864721 8 log.go:172] (0xc00293f970) (0xc002cd99a0) Stream added, broadcasting: 5 I0626 00:11:38.866106 8 log.go:172] (0xc00293f970) Reply frame received for 5 I0626 00:11:38.936511 8 log.go:172] (0xc00293f970) Data frame received for 3 I0626 00:11:38.936541 8 log.go:172] (0xc002b80780) (3) Data frame handling I0626 00:11:38.936554 8 log.go:172] (0xc002b80780) (3) Data frame sent I0626 00:11:38.936771 8 log.go:172] (0xc00293f970) Data frame received for 3 I0626 00:11:38.936784 8 log.go:172] (0xc002b80780) (3) Data frame handling I0626 00:11:38.937608 8 log.go:172] (0xc00293f970) Data frame received for 5 I0626 00:11:38.937638 8 log.go:172] (0xc002cd99a0) (5) Data frame handling I0626 00:11:38.939039 8 log.go:172] (0xc00293f970) Data frame received for 1 I0626 00:11:38.939058 8 log.go:172] (0xc00281d0e0) (1) Data frame handling I0626 00:11:38.939079 8 log.go:172] (0xc00281d0e0) (1) Data frame sent I0626 00:11:38.939092 8 log.go:172] (0xc00293f970) (0xc00281d0e0) Stream removed, broadcasting: 1 I0626 00:11:38.939108 8 
log.go:172] (0xc00293f970) Go away received I0626 00:11:38.939231 8 log.go:172] (0xc00293f970) (0xc00281d0e0) Stream removed, broadcasting: 1 I0626 00:11:38.939249 8 log.go:172] (0xc00293f970) (0xc002b80780) Stream removed, broadcasting: 3 I0626 00:11:38.939258 8 log.go:172] (0xc00293f970) (0xc002cd99a0) Stream removed, broadcasting: 5 Jun 26 00:11:38.939: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:11:38.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2963" for this suite. • [SLOW TEST:26.577 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":294,"completed":100,"skipped":1477,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:11:38.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:85 Jun 26 00:11:39.105: INFO: Waiting up to 1m0s for all nodes to be ready Jun 26 00:12:39.130: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Jun 26 00:12:39.171: INFO: Created pod: pod0-sched-preemption-low-priority Jun 26 00:12:39.215: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:13:01.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7479" for this suite. 
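The preemption spec relies on PriorityClass objects: a pod referencing a class with a higher `value` may evict lower-priority pods when resources run out, which is why the high-priority pod above displaces one of the pods occupying 2/3 of node resources. A rough sketch of the wiring; the class name, value, and image are illustrative, not read from the log.

```go
// Sketch: the PriorityClass + pod pairing that scheduler preemption uses.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	high := &schedulingv1.PriorityClass{
		TypeMeta:    metav1.TypeMeta{APIVersion: "scheduling.k8s.io/v1", Kind: "PriorityClass"},
		ObjectMeta:  metav1.ObjectMeta{Name: "high-priority-demo"},
		Value:       1000, // higher value wins; lower-priority pods may be preempted
		Description: "demo class for preemption",
	}
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-demo"},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority-demo",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	for _, obj := range []interface{}{high, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```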
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:75 • [SLOW TEST:82.455 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":294,"completed":101,"skipped":1486,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:13:01.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:13:05.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5439" for this suite. 
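The hostAliases spec checks that entries from `pod.spec.hostAliases` are written by the kubelet into the container's /etc/hosts. A minimal reconstruction; the IP, hostnames, and image are assumed.

```go
// Sketch: hostAliases entries that the kubelet injects into /etc/hosts.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "hostaliases-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			HostAliases: []corev1.HostAlias{
				{IP: "127.0.0.1", Hostnames: []string{"foo.local", "bar.local"}},
			},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"cat", "/etc/hosts"}, // shows the injected entries
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```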
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":102,"skipped":1510,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:13:05.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:13:05.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8647" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":294,"completed":103,"skipped":1535,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:13:05.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-558/secret-test-64d778b6-3dc4-4a8b-aef8-c3fe204fd8c6 STEP: Creating a pod to test consume secrets Jun 26 00:13:05.817: INFO: Waiting up to 5m0s for pod "pod-configmaps-8e6d7278-f46b-4620-bb0d-3cf2f11ae39a" in namespace "secrets-558" to be "Succeeded or Failed" Jun 26 00:13:05.820: INFO: Pod "pod-configmaps-8e6d7278-f46b-4620-bb0d-3cf2f11ae39a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.491052ms Jun 26 00:13:08.090: INFO: Pod "pod-configmaps-8e6d7278-f46b-4620-bb0d-3cf2f11ae39a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.273838374s Jun 26 00:13:10.104: INFO: Pod "pod-configmaps-8e6d7278-f46b-4620-bb0d-3cf2f11ae39a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.287288592s STEP: Saw pod success Jun 26 00:13:10.104: INFO: Pod "pod-configmaps-8e6d7278-f46b-4620-bb0d-3cf2f11ae39a" satisfied condition "Succeeded or Failed" Jun 26 00:13:10.107: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8e6d7278-f46b-4620-bb0d-3cf2f11ae39a container env-test: STEP: delete the pod Jun 26 00:13:10.134: INFO: Waiting for pod pod-configmaps-8e6d7278-f46b-4620-bb0d-3cf2f11ae39a to disappear Jun 26 00:13:10.154: INFO: Pod pod-configmaps-8e6d7278-f46b-4620-bb0d-3cf2f11ae39a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:13:10.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-558" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":294,"completed":104,"skipped":1554,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:13:10.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 26 00:13:10.261: INFO: Waiting up to 5m0s for pod "pod-620b4d48-bd19-48ef-a25b-5c125af89de1" in namespace "emptydir-8539" to be "Succeeded or Failed" Jun 26 00:13:10.264: INFO: Pod "pod-620b4d48-bd19-48ef-a25b-5c125af89de1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.593886ms Jun 26 00:13:12.269: INFO: Pod "pod-620b4d48-bd19-48ef-a25b-5c125af89de1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008096939s Jun 26 00:13:14.272: INFO: Pod "pod-620b4d48-bd19-48ef-a25b-5c125af89de1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011548857s STEP: Saw pod success Jun 26 00:13:14.272: INFO: Pod "pod-620b4d48-bd19-48ef-a25b-5c125af89de1" satisfied condition "Succeeded or Failed" Jun 26 00:13:14.275: INFO: Trying to get logs from node latest-worker2 pod pod-620b4d48-bd19-48ef-a25b-5c125af89de1 container test-container: STEP: delete the pod Jun 26 00:13:14.433: INFO: Waiting for pod pod-620b4d48-bd19-48ef-a25b-5c125af89de1 to disappear Jun 26 00:13:14.467: INFO: Pod pod-620b4d48-bd19-48ef-a25b-5c125af89de1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:13:14.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8539" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":105,"skipped":1570,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:13:14.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-e3bf638c-ffc6-41d8-a408-09f7b2e8a808 STEP: Creating configMap with name cm-test-opt-upd-ff533261-c00d-40ec-809b-be1ff327c1b6 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e3bf638c-ffc6-41d8-a408-09f7b2e8a808 STEP: Updating configmap cm-test-opt-upd-ff533261-c00d-40ec-809b-be1ff327c1b6 STEP: Creating configMap with name cm-test-opt-create-80b47e72-9ed9-4d2d-9f36-c0e4792c771d STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:13:24.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4101" for this suite. 
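The optional-updates spec mounts configMap volumes marked `optional: true`, then deletes, updates, and creates the underlying ConfigMaps and waits for the kubelet to resync the mounted files. One such volume might look like the sketch below; the pod shape and names are guessed, not read from the log.

```go
// Sketch: an optional ConfigMap volume; the pod starts even if the
// ConfigMap is absent, and the kubelet syncs later changes into the mount.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "cm-optional-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
						Optional:             boolPtr(true), // tolerate a missing ConfigMap
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```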
• [SLOW TEST:10.226 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":106,"skipped":1622,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:13:24.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-373ba8d4-b20b-48e5-8c94-6befdf160228 STEP: Creating a pod to test consume secrets Jun 26 00:13:24.813: INFO: Waiting up to 5m0s for pod "pod-secrets-4d5e6ea9-a2d9-4ee9-858e-731b8a1311da" in namespace "secrets-5275" to be "Succeeded or Failed" Jun 26 00:13:24.831: INFO: Pod "pod-secrets-4d5e6ea9-a2d9-4ee9-858e-731b8a1311da": Phase="Pending", Reason="", readiness=false. Elapsed: 18.127493ms Jun 26 00:13:26.835: INFO: Pod "pod-secrets-4d5e6ea9-a2d9-4ee9-858e-731b8a1311da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022517209s Jun 26 00:13:28.840: INFO: Pod "pod-secrets-4d5e6ea9-a2d9-4ee9-858e-731b8a1311da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02747575s STEP: Saw pod success Jun 26 00:13:28.840: INFO: Pod "pod-secrets-4d5e6ea9-a2d9-4ee9-858e-731b8a1311da" satisfied condition "Succeeded or Failed" Jun 26 00:13:28.844: INFO: Trying to get logs from node latest-worker pod pod-secrets-4d5e6ea9-a2d9-4ee9-858e-731b8a1311da container secret-volume-test: STEP: delete the pod Jun 26 00:13:28.878: INFO: Waiting for pod pod-secrets-4d5e6ea9-a2d9-4ee9-858e-731b8a1311da to disappear Jun 26 00:13:28.887: INFO: Pod pod-secrets-4d5e6ea9-a2d9-4ee9-858e-731b8a1311da no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:13:28.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5275" for this suite. 
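"Consumable in multiple volumes" simply mounts the same Secret at two paths in one pod. A plausible reconstruction, with assumed names and image:

```go
// Sketch: one Secret mounted at two paths inside a single pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secretVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "secret-demo"},
			},
		}
	}
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "secret-multi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/secret-1/* /etc/secret-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "vol-1", MountPath: "/etc/secret-1", ReadOnly: true},
					{Name: "vol-2", MountPath: "/etc/secret-2", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{secretVol("vol-1"), secretVol("vol-2")},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```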
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":294,"completed":107,"skipped":1643,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:13:28.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 26 00:13:28.963: INFO: Waiting up to 5m0s for pod "pod-642c578c-6f14-4ac9-b560-184727622c87" in namespace "emptydir-445" to be "Succeeded or Failed" Jun 26 00:13:28.965: INFO: Pod "pod-642c578c-6f14-4ac9-b560-184727622c87": Phase="Pending", Reason="", readiness=false. Elapsed: 1.98206ms Jun 26 00:13:31.187: INFO: Pod "pod-642c578c-6f14-4ac9-b560-184727622c87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223708512s Jun 26 00:13:33.191: INFO: Pod "pod-642c578c-6f14-4ac9-b560-184727622c87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.228049489s STEP: Saw pod success Jun 26 00:13:33.191: INFO: Pod "pod-642c578c-6f14-4ac9-b560-184727622c87" satisfied condition "Succeeded or Failed" Jun 26 00:13:33.194: INFO: Trying to get logs from node latest-worker pod pod-642c578c-6f14-4ac9-b560-184727622c87 container test-container: STEP: delete the pod Jun 26 00:13:33.523: INFO: Waiting for pod pod-642c578c-6f14-4ac9-b560-184727622c87 to disappear Jun 26 00:13:33.557: INFO: Pod pod-642c578c-6f14-4ac9-b560-184727622c87 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:13:33.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-445" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":108,"skipped":1653,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:13:33.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:13:33.686: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21b618b4-e514-4497-843a-d3dd251c3b04" in namespace "projected-7671" to be "Succeeded or Failed" Jun 26 00:13:33.691: INFO: Pod "downwardapi-volume-21b618b4-e514-4497-843a-d3dd251c3b04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.863934ms Jun 26 00:13:35.875: INFO: Pod "downwardapi-volume-21b618b4-e514-4497-843a-d3dd251c3b04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189342863s Jun 26 00:13:37.880: INFO: Pod "downwardapi-volume-21b618b4-e514-4497-843a-d3dd251c3b04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.194370066s STEP: Saw pod success Jun 26 00:13:37.880: INFO: Pod "downwardapi-volume-21b618b4-e514-4497-843a-d3dd251c3b04" satisfied condition "Succeeded or Failed" Jun 26 00:13:37.882: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-21b618b4-e514-4497-843a-d3dd251c3b04 container client-container: STEP: delete the pod Jun 26 00:13:37.930: INFO: Waiting for pod downwardapi-volume-21b618b4-e514-4497-843a-d3dd251c3b04 to disappear Jun 26 00:13:37.942: INFO: Pod downwardapi-volume-21b618b4-e514-4497-843a-d3dd251c3b04 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:13:37.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7671" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":109,"skipped":1653,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:13:37.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 26 00:13:38.062: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:13:38.087: INFO: Number of nodes with available pods: 0 Jun 26 00:13:38.087: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:13:39.092: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:13:39.096: INFO: Number of nodes with available pods: 0 Jun 26 00:13:39.096: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:13:40.240: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:13:40.244: INFO: Number of nodes with available pods: 0 Jun 26 00:13:40.244: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:13:41.091: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:13:41.094: INFO: Number of nodes with available pods: 0 Jun 26 00:13:41.094: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:13:42.188: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:13:42.191: INFO: Number of nodes with available pods: 0 Jun 26 00:13:42.191: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:13:43.102: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:13:43.116: INFO: Number of nodes with available pods: 2 Jun 26 00:13:43.116: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', 
check that the daemon pod is revived. Jun 26 00:13:43.182: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:13:43.236: INFO: Number of nodes with available pods: 1 Jun 26 00:13:43.236: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:13:44.421: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:13:44.426: INFO: Number of nodes with available pods: 1 Jun 26 00:13:44.426: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:13:45.324: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:13:45.405: INFO: Number of nodes with available pods: 1 Jun 26 00:13:45.405: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:13:46.277: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:13:46.281: INFO: Number of nodes with available pods: 1 Jun 26 00:13:46.281: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:13:47.241: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:13:47.244: INFO: Number of nodes with available pods: 2 Jun 26 00:13:47.244: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9516, will wait for the garbage collector to delete the pods Jun 26 00:13:47.308: INFO: Deleting DaemonSet.extensions daemon-set took: 6.003348ms Jun 26 00:13:47.708: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.264814ms Jun 26 00:13:55.311: INFO: Number of nodes with available pods: 0 Jun 26 00:13:55.311: INFO: Number of running nodes: 0, number of available pods: 0 Jun 26 00:13:55.314: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9516/daemonsets","resourceVersion":"15912458"},"items":null} Jun 26 00:13:55.316: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9516/pods","resourceVersion":"15912458"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:13:55.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9516" for this suite. 
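The retry spec creates a DaemonSet, forces one of its pods to `Failed`, and verifies the controller revives it. The repeated "can't tolerate node latest-control-plane" lines show why the control-plane node is excluded: the pod template carries no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint. A minimal DaemonSet along those lines (labels and image are assumed):

```go
// Sketch: a minimal DaemonSet like the "daemon-set" in this test. The
// controller recreates any of its pods whose phase goes to Failed.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// No toleration for node-role.kubernetes.io/master, so the
					// tainted control-plane node is skipped, as the log shows.
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```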
• [SLOW TEST:17.383 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":294,"completed":110,"skipped":1653,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:13:55.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:13:55.419: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f7d71dc-f73e-4307-aa01-42cfa4ff7589" in namespace "downward-api-9201" to be "Succeeded or Failed" Jun 26 00:13:55.435: INFO: Pod "downwardapi-volume-0f7d71dc-f73e-4307-aa01-42cfa4ff7589": Phase="Pending", Reason="", readiness=false. Elapsed: 16.67817ms Jun 26 00:13:57.439: INFO: Pod "downwardapi-volume-0f7d71dc-f73e-4307-aa01-42cfa4ff7589": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020435947s Jun 26 00:13:59.443: INFO: Pod "downwardapi-volume-0f7d71dc-f73e-4307-aa01-42cfa4ff7589": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024562975s STEP: Saw pod success Jun 26 00:13:59.443: INFO: Pod "downwardapi-volume-0f7d71dc-f73e-4307-aa01-42cfa4ff7589" satisfied condition "Succeeded or Failed" Jun 26 00:13:59.446: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0f7d71dc-f73e-4307-aa01-42cfa4ff7589 container client-container: STEP: delete the pod Jun 26 00:13:59.465: INFO: Waiting for pod downwardapi-volume-0f7d71dc-f73e-4307-aa01-42cfa4ff7589 to disappear Jun 26 00:13:59.538: INFO: Pod downwardapi-volume-0f7d71dc-f73e-4307-aa01-42cfa4ff7589 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:13:59.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9201" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":294,"completed":111,"skipped":1673,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:13:59.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-62c1e9a7-096b-47fd-a408-238e1fe8e221 STEP: Creating a pod to test consume configMaps Jun 26 00:13:59.650: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2e2bc5d4-1aa0-409b-8153-d932efccaacd" in namespace "projected-1078" to be "Succeeded or Failed" Jun 26 00:13:59.655: INFO: Pod "pod-projected-configmaps-2e2bc5d4-1aa0-409b-8153-d932efccaacd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344947ms Jun 26 00:14:01.659: INFO: Pod "pod-projected-configmaps-2e2bc5d4-1aa0-409b-8153-d932efccaacd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008589406s Jun 26 00:14:03.664: INFO: Pod "pod-projected-configmaps-2e2bc5d4-1aa0-409b-8153-d932efccaacd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013574789s STEP: Saw pod success Jun 26 00:14:03.664: INFO: Pod "pod-projected-configmaps-2e2bc5d4-1aa0-409b-8153-d932efccaacd" satisfied condition "Succeeded or Failed" Jun 26 00:14:03.667: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-2e2bc5d4-1aa0-409b-8153-d932efccaacd container projected-configmap-volume-test: STEP: delete the pod Jun 26 00:14:03.788: INFO: Waiting for pod pod-projected-configmaps-2e2bc5d4-1aa0-409b-8153-d932efccaacd to disappear Jun 26 00:14:03.799: INFO: Pod pod-projected-configmaps-2e2bc5d4-1aa0-409b-8153-d932efccaacd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:14:03.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1078" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":112,"skipped":1686,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:14:03.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:14:03.921: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81c297e0-f09c-4b97-8239-4511403eb78c" in namespace "projected-2758" to be "Succeeded or Failed" Jun 26 00:14:03.951: INFO: Pod "downwardapi-volume-81c297e0-f09c-4b97-8239-4511403eb78c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.570471ms Jun 26 00:14:05.968: INFO: Pod "downwardapi-volume-81c297e0-f09c-4b97-8239-4511403eb78c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04668539s Jun 26 00:14:07.979: INFO: Pod "downwardapi-volume-81c297e0-f09c-4b97-8239-4511403eb78c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058183283s STEP: Saw pod success Jun 26 00:14:07.979: INFO: Pod "downwardapi-volume-81c297e0-f09c-4b97-8239-4511403eb78c" satisfied condition "Succeeded or Failed" Jun 26 00:14:07.982: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-81c297e0-f09c-4b97-8239-4511403eb78c container client-container: STEP: delete the pod Jun 26 00:14:08.051: INFO: Waiting for pod downwardapi-volume-81c297e0-f09c-4b97-8239-4511403eb78c to disappear Jun 26 00:14:08.063: INFO: Pod downwardapi-volume-81c297e0-f09c-4b97-8239-4511403eb78c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:14:08.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2758" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":294,"completed":113,"skipped":1687,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:14:08.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 26 00:14:08.119: INFO: Waiting up to 5m0s for pod "downward-api-4fdb479f-cfdb-4b06-a884-23fe543ea38a" in namespace "downward-api-5131" to be "Succeeded or Failed" Jun 26 00:14:08.143: INFO: Pod "downward-api-4fdb479f-cfdb-4b06-a884-23fe543ea38a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.119618ms Jun 26 00:14:10.147: INFO: Pod "downward-api-4fdb479f-cfdb-4b06-a884-23fe543ea38a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027426015s Jun 26 00:14:12.158: INFO: Pod "downward-api-4fdb479f-cfdb-4b06-a884-23fe543ea38a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03843004s STEP: Saw pod success Jun 26 00:14:12.158: INFO: Pod "downward-api-4fdb479f-cfdb-4b06-a884-23fe543ea38a" satisfied condition "Succeeded or Failed" Jun 26 00:14:12.161: INFO: Trying to get logs from node latest-worker pod downward-api-4fdb479f-cfdb-4b06-a884-23fe543ea38a container dapi-container: STEP: delete the pod Jun 26 00:14:12.224: INFO: Waiting for pod downward-api-4fdb479f-cfdb-4b06-a884-23fe543ea38a to disappear Jun 26 00:14:12.242: INFO: Pod downward-api-4fdb479f-cfdb-4b06-a884-23fe543ea38a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:14:12.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5131" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":294,"completed":114,"skipped":1720,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:14:12.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Jun 26 00:14:12.433: INFO: Waiting up to 5m0s for pod "var-expansion-ba2e47df-9a01-4cc8-b82a-800c8dcc6e10" in namespace "var-expansion-6486" to be "Succeeded or Failed" Jun 26 00:14:12.530: INFO: Pod "var-expansion-ba2e47df-9a01-4cc8-b82a-800c8dcc6e10": Phase="Pending", Reason="", readiness=false. Elapsed: 97.067808ms Jun 26 00:14:14.535: INFO: Pod "var-expansion-ba2e47df-9a01-4cc8-b82a-800c8dcc6e10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101806025s Jun 26 00:14:16.539: INFO: Pod "var-expansion-ba2e47df-9a01-4cc8-b82a-800c8dcc6e10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106518251s STEP: Saw pod success Jun 26 00:14:16.539: INFO: Pod "var-expansion-ba2e47df-9a01-4cc8-b82a-800c8dcc6e10" satisfied condition "Succeeded or Failed" Jun 26 00:14:16.542: INFO: Trying to get logs from node latest-worker2 pod var-expansion-ba2e47df-9a01-4cc8-b82a-800c8dcc6e10 container dapi-container: STEP: delete the pod Jun 26 00:14:16.712: INFO: Waiting for pod var-expansion-ba2e47df-9a01-4cc8-b82a-800c8dcc6e10 to disappear Jun 26 00:14:16.821: INFO: Pod var-expansion-ba2e47df-9a01-4cc8-b82a-800c8dcc6e10 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:14:16.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6486" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":294,"completed":115,"skipped":1722,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:14:16.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 26 00:14:16.994: INFO: Waiting up to 5m0s for pod "downward-api-4dcf461e-9538-40a8-9ec9-ce247529e46f" in namespace "downward-api-2071" to be "Succeeded or Failed" Jun 26 00:14:16.997: INFO: Pod "downward-api-4dcf461e-9538-40a8-9ec9-ce247529e46f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.988475ms Jun 26 00:14:19.055: INFO: Pod "downward-api-4dcf461e-9538-40a8-9ec9-ce247529e46f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061262732s Jun 26 00:14:21.060: INFO: Pod "downward-api-4dcf461e-9538-40a8-9ec9-ce247529e46f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065775841s STEP: Saw pod success Jun 26 00:14:21.060: INFO: Pod "downward-api-4dcf461e-9538-40a8-9ec9-ce247529e46f" satisfied condition "Succeeded or Failed" Jun 26 00:14:21.063: INFO: Trying to get logs from node latest-worker pod downward-api-4dcf461e-9538-40a8-9ec9-ce247529e46f container dapi-container: STEP: delete the pod Jun 26 00:14:21.104: INFO: Waiting for pod downward-api-4dcf461e-9538-40a8-9ec9-ce247529e46f to disappear Jun 26 00:14:21.134: INFO: Pod downward-api-4dcf461e-9538-40a8-9ec9-ce247529e46f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:14:21.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2071" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":294,"completed":116,"skipped":1758,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:14:21.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:14:21.270: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 26 00:14:26.274: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 26 00:14:26.274: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 26 00:14:28.278: INFO: Creating deployment "test-rollover-deployment" Jun 26 00:14:28.292: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 26 00:14:30.298: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 26 00:14:30.304: INFO: Ensure that both replica sets have 1 created replica Jun 26 00:14:30.310: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 26 00:14:30.316: INFO: Updating deployment test-rollover-deployment Jun 26 00:14:30.316: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 26 00:14:32.463: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 26 00:14:32.470: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 26 00:14:32.475: INFO: all replica sets need to contain the pod-template-hash label Jun 26 00:14:32.475: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727270, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:14:34.659: INFO: all replica sets need to contain the 
pod-template-hash label Jun 26 00:14:34.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727274, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:14:36.484: INFO: all replica sets need to contain the pod-template-hash label Jun 26 00:14:36.484: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727274, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:14:38.484: INFO: all replica sets need to contain the pod-template-hash label Jun 26 00:14:38.484: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727274, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:14:40.483: INFO: all replica sets need to contain the pod-template-hash label Jun 26 00:14:40.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727274, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:14:42.483: INFO: all replica sets need to contain the pod-template-hash label Jun 26 00:14:42.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727274, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:14:44.492: INFO: Jun 26 00:14:44.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727284, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727268, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:14:46.483: INFO: Jun 26 00:14:46.483: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 26 00:14:46.491: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-652 /apis/apps/v1/namespaces/deployment-652/deployments/test-rollover-deployment bbe8d5c7-8565-47c3-816b-f8e0df436bdf 15912855 2 2020-06-26 00:14:28 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-26 00:14:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-26 00:14:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0003c7a88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-26 00:14:28 +0000 UTC,LastTransitionTime:2020-06-26 00:14:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-06-26 00:14:44 +0000 UTC,LastTransitionTime:2020-06-26 00:14:28 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 26 00:14:46.494: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-652 /apis/apps/v1/namespaces/deployment-652/replicasets/test-rollover-deployment-7c4fd9c879 dad23bf9-a05a-447f-b95d-886ddb7903b2 15912844 2 2020-06-26 00:14:30 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment bbe8d5c7-8565-47c3-816b-f8e0df436bdf 0xc004e88487 0xc004e88488}] [] [{kube-controller-manager Update apps/v1 2020-06-26 00:14:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbe8d5c7-8565-47c3-816b-f8e0df436bdf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004e88518 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 26 00:14:46.494: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 26 00:14:46.494: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-652 /apis/apps/v1/namespaces/deployment-652/replicasets/test-rollover-controller c599d99e-b4b4-417f-b0e9-651ab6cff1e2 15912854 2 2020-06-26 00:14:21 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment bbe8d5c7-8565-47c3-816b-f8e0df436bdf 0xc004e8825f 0xc004e88270}] [] [{e2e.test Update apps/v1 2020-06-26 00:14:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-26 00:14:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbe8d5c7-8565-47c3-816b-f8e0df436bdf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004e88308 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 00:14:46.494: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-652 /apis/apps/v1/namespaces/deployment-652/replicasets/test-rollover-deployment-5686c4cfd5 a6fb5009-c0dd-40f7-a30e-d8e324e84b2f 15912792 2 2020-06-26 00:14:28 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment bbe8d5c7-8565-47c3-816b-f8e0df436bdf 0xc004e88387 0xc004e88388}] [] [{kube-controller-manager Update apps/v1 2020-06-26 00:14:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbe8d5c7-8565-47c3-816b-f8e0df436bdf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004e88418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 00:14:46.496: INFO: Pod "test-rollover-deployment-7c4fd9c879-cnfst" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-cnfst test-rollover-deployment-7c4fd9c879- deployment-652 /api/v1/namespaces/deployment-652/pods/test-rollover-deployment-7c4fd9c879-cnfst 61cd4457-8767-4c64-b006-f10c8f9ea53d 15912812 0 2020-06-26 00:14:30 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 dad23bf9-a05a-447f-b95d-886ddb7903b2 0xc004e88ad7 0xc004e88ad8}] [] [{kube-controller-manager Update v1 2020-06-26 00:14:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dad23bf9-a05a-447f-b95d-886ddb7903b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:14:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.176\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ctd6s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ctd6s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ctd6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:14:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-06-26 00:14:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:14:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:14:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.176,StartTime:2020-06-26 00:14:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 00:14:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://7b93e49977f83ffd9e790f84627b1f1d568a880a1a09f1f2b13545dd052b4d5e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.176,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:14:46.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-652" for this suite. • [SLOW TEST:25.360 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":294,"completed":117,"skipped":1765,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------
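What the status dumps above are showing: updating the Deployment's pod template (the "new image update" step) creates a second ReplicaSet at revision 2; maxSurge=1 lets the controller add the new pod while maxUnavailable=0 keeps the old one serving, and only after the new pod has been Ready for minReadySeconds (10s here) are the old ReplicaSets scaled to zero, which is exactly the window in which the log keeps repeating "all replica sets need to contain the pod-template-hash label". A minimal client-go sketch of the update step that triggers the rollover; the function and argument names are illustrative.

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // rolloverImage swaps the image in the Deployment's pod template, which
    // is what bumps the revision and spawns the new ReplicaSet seen above.
    func rolloverImage(ctx context.Context, c kubernetes.Interface, ns, name, image string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            d, err := c.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            d.Spec.Template.Spec.Containers[0].Image = image
            _, err = c.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
            return err
        })
    }

The RetryOnConflict wrapper is the usual guard against a concurrent writer (here the deployment controller updating status) bumping the object's resourceVersion between the Get and the Update.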
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:14:46.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:14:47.702: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 00:14:49.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727287, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727287, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727287, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727287, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:14:52.991: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:14:52.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6177-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:14:54.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4737" for this suite. STEP: Destroying namespace "webhook-4737-markers" for this suite. 
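Registering the mutating webhook "via the AdmissionRegistration API", as the step above puts it, comes down to creating a MutatingWebhookConfiguration that routes CREATE requests for the custom resource to the webhook Service deployed earlier. A sketch using the admissionregistration.k8s.io/v1 types; the group, resource plural, namespace, and service name are lifted from the log, while the configuration name, URL path, CRD version, and caBundle plumbing are assumptions.

    package main

    import (
        "context"

        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func registerMutatingWebhook(ctx context.Context, c kubernetes.Interface, caBundle []byte) error {
        failurePolicy := admissionregistrationv1.Fail
        sideEffects := admissionregistrationv1.SideEffectClassNone
        path := "/mutating-custom-resource" // assumed path; the e2e server's real route may differ
        cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"}, // illustrative name
            Webhooks: []admissionregistrationv1.MutatingWebhook{{
                Name: "mutate-crd.webhook.example.com",
                Rules: []admissionregistrationv1.RuleWithOperations{{
                    Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                    Rule: admissionregistrationv1.Rule{
                        APIGroups:   []string{"webhook.example.com"},
                        APIVersions: []string{"v1"}, // assumed CRD version
                        Resources:   []string{"e2e-test-webhook-6177-crds"},
                    },
                }},
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    Service: &admissionregistrationv1.ServiceReference{
                        Namespace: "webhook-4737",
                        Name:      "e2e-test-webhook",
                        Path:      &path,
                    },
                    CABundle: caBundle, // cert from the "Setting up server cert" step
                },
                SideEffects:             &sideEffects,
                FailurePolicy:           &failurePolicy,
                AdmissionReviewVersions: []string{"v1", "v1beta1"},
            }},
        }
        _, err := c.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
        return err
    }

The "with pruning" part of the test name refers to the CRD having a structural schema: fields the webhook patches into the object survive only if the schema declares them, and the test asserts the mutation is still present after the API server prunes.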
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.789 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":294,"completed":118,"skipped":1796,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:14:54.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Jun 26 00:14:54.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' Jun 26 00:14:54.811: INFO: stderr: "" Jun 26 00:14:54.811: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:14:54.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6523" for this suite. 
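The same assertion can be made without shelling out to kubectl: `kubectl api-versions` just walks the discovery endpoints, and the core/legacy group reports the bare group/version "v1" that the test looks for at the end of the stdout dump above. A minimal client-go sketch; the kubeconfig path mirrors this run.

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        groups, err := dc.ServerGroups()
        if err != nil {
            log.Fatal(err)
        }
        found := false
        for _, g := range groups.Groups {
            for _, v := range g.Versions {
                fmt.Println(v.GroupVersion) // same entries kubectl api-versions prints
                if v.GroupVersion == "v1" {
                    found = true
                }
            }
        }
        fmt.Println("core v1 present:", found)
    }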
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":294,"completed":119,"skipped":1802,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:14:54.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:14:54.914: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9473 I0626 00:14:54.928628 8 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9473, replica count: 1 I0626 00:14:55.979098 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 00:14:56.979392 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 00:14:57.979680 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 00:14:58.979918 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 26 00:14:59.115: INFO: Created: latency-svc-ghzpc Jun 26 00:14:59.129: INFO: Got endpoints: latency-svc-ghzpc [49.303896ms] Jun 26 00:14:59.174: INFO: Created: latency-svc-ss8gp Jun 26 00:14:59.235: INFO: Got endpoints: latency-svc-ss8gp [105.473544ms] Jun 26 00:14:59.236: INFO: Created: latency-svc-z6qk4 Jun 26 00:14:59.247: INFO: Got endpoints: latency-svc-z6qk4 [117.822548ms] Jun 26 00:14:59.267: INFO: Created: latency-svc-9qz6q Jun 26 00:14:59.297: INFO: Got endpoints: latency-svc-9qz6q [167.809983ms] Jun 26 00:14:59.331: INFO: Created: latency-svc-6vj6n Jun 26 00:14:59.367: INFO: Got endpoints: latency-svc-6vj6n [237.457882ms] Jun 26 00:14:59.372: INFO: Created: latency-svc-nf44w Jun 26 00:14:59.391: INFO: Got endpoints: latency-svc-nf44w [261.568286ms] Jun 26 00:14:59.442: INFO: Created: latency-svc-4pkdq Jun 26 00:14:59.451: INFO: Got endpoints: latency-svc-4pkdq [321.332403ms] Jun 26 00:14:59.519: INFO: Created: latency-svc-mxt4p Jun 26 00:14:59.540: INFO: Got endpoints: latency-svc-mxt4p [410.74977ms] Jun 26 00:14:59.576: INFO: Created: latency-svc-8jx9h Jun 26 00:14:59.589: INFO: Got endpoints: latency-svc-8jx9h [459.504598ms] Jun 26 00:14:59.687: INFO: Created: latency-svc-84xxl Jun 26 00:14:59.703: INFO: Got endpoints: latency-svc-84xxl [574.210439ms] Jun 26 00:14:59.751: INFO: Created: latency-svc-5hm76 Jun 26 00:14:59.769: INFO: Got endpoints: latency-svc-5hm76 [639.556433ms] Jun 26 
00:14:59.891: INFO: Created: latency-svc-6wtl7 Jun 26 00:14:59.938: INFO: Got endpoints: latency-svc-6wtl7 [808.633986ms] Jun 26 00:14:59.984: INFO: Created: latency-svc-jdr58 Jun 26 00:14:59.997: INFO: Got endpoints: latency-svc-jdr58 [867.984288ms] Jun 26 00:15:00.062: INFO: Created: latency-svc-ktthr Jun 26 00:15:00.075: INFO: Got endpoints: latency-svc-ktthr [945.873262ms] Jun 26 00:15:00.125: INFO: Created: latency-svc-6ww7d Jun 26 00:15:00.136: INFO: Got endpoints: latency-svc-6ww7d [1.006421525s] Jun 26 00:15:00.161: INFO: Created: latency-svc-gzzjr Jun 26 00:15:00.226: INFO: Got endpoints: latency-svc-gzzjr [1.09603735s] Jun 26 00:15:00.293: INFO: Created: latency-svc-8tmvj Jun 26 00:15:00.306: INFO: Got endpoints: latency-svc-8tmvj [1.070912231s] Jun 26 00:15:00.413: INFO: Created: latency-svc-sbkz4 Jun 26 00:15:00.426: INFO: Got endpoints: latency-svc-sbkz4 [1.17879489s] Jun 26 00:15:00.510: INFO: Created: latency-svc-7fzch Jun 26 00:15:00.523: INFO: Got endpoints: latency-svc-7fzch [1.225541381s] Jun 26 00:15:00.551: INFO: Created: latency-svc-h266f Jun 26 00:15:00.575: INFO: Got endpoints: latency-svc-h266f [1.208749689s] Jun 26 00:15:00.654: INFO: Created: latency-svc-hlf8f Jun 26 00:15:00.666: INFO: Got endpoints: latency-svc-hlf8f [1.27572066s] Jun 26 00:15:00.692: INFO: Created: latency-svc-tskdc Jun 26 00:15:00.702: INFO: Got endpoints: latency-svc-tskdc [1.251806997s] Jun 26 00:15:00.743: INFO: Created: latency-svc-kk54b Jun 26 00:15:00.792: INFO: Got endpoints: latency-svc-kk54b [1.251637815s] Jun 26 00:15:00.833: INFO: Created: latency-svc-5plld Jun 26 00:15:00.872: INFO: Got endpoints: latency-svc-5plld [1.283418551s] Jun 26 00:15:00.947: INFO: Created: latency-svc-4mrsn Jun 26 00:15:00.955: INFO: Got endpoints: latency-svc-4mrsn [1.251880526s] Jun 26 00:15:00.995: INFO: Created: latency-svc-fszvm Jun 26 00:15:01.010: INFO: Got endpoints: latency-svc-fszvm [1.241166538s] Jun 26 00:15:01.038: INFO: Created: latency-svc-r5frn Jun 26 00:15:01.109: INFO: Got endpoints: latency-svc-r5frn [1.171224383s] Jun 26 00:15:01.148: INFO: Created: latency-svc-9w9mr Jun 26 00:15:01.168: INFO: Got endpoints: latency-svc-9w9mr [1.170803676s] Jun 26 00:15:01.262: INFO: Created: latency-svc-dh7f2 Jun 26 00:15:01.289: INFO: Created: latency-svc-l48kq Jun 26 00:15:01.289: INFO: Got endpoints: latency-svc-dh7f2 [1.213768152s] Jun 26 00:15:01.319: INFO: Got endpoints: latency-svc-l48kq [1.183058479s] Jun 26 00:15:01.452: INFO: Created: latency-svc-87s9l Jun 26 00:15:01.457: INFO: Got endpoints: latency-svc-87s9l [1.230951438s] Jun 26 00:15:01.491: INFO: Created: latency-svc-d9kxf Jun 26 00:15:01.503: INFO: Got endpoints: latency-svc-d9kxf [1.197000967s] Jun 26 00:15:01.643: INFO: Created: latency-svc-s98rh Jun 26 00:15:01.648: INFO: Got endpoints: latency-svc-s98rh [1.222626595s] Jun 26 00:15:01.696: INFO: Created: latency-svc-crf94 Jun 26 00:15:01.707: INFO: Got endpoints: latency-svc-crf94 [1.184613673s] Jun 26 00:15:01.786: INFO: Created: latency-svc-2cqxw Jun 26 00:15:01.790: INFO: Got endpoints: latency-svc-2cqxw [1.215026542s] Jun 26 00:15:01.822: INFO: Created: latency-svc-749j5 Jun 26 00:15:01.847: INFO: Got endpoints: latency-svc-749j5 [1.180105597s] Jun 26 00:15:01.948: INFO: Created: latency-svc-2bzrn Jun 26 00:15:01.960: INFO: Got endpoints: latency-svc-2bzrn [1.257386034s] Jun 26 00:15:02.039: INFO: Created: latency-svc-wnq6s Jun 26 00:15:02.085: INFO: Got endpoints: latency-svc-wnq6s [1.293420008s] Jun 26 00:15:02.123: INFO: Created: latency-svc-gcztz Jun 26 00:15:02.140: INFO: 
Got endpoints: latency-svc-gcztz [1.267741549s] Jun 26 00:15:02.242: INFO: Created: latency-svc-v49sk Jun 26 00:15:02.245: INFO: Got endpoints: latency-svc-v49sk [1.290002152s] Jun 26 00:15:02.273: INFO: Created: latency-svc-866sk Jun 26 00:15:02.303: INFO: Got endpoints: latency-svc-866sk [1.292562552s] Jun 26 00:15:02.384: INFO: Created: latency-svc-mngbg Jun 26 00:15:02.394: INFO: Got endpoints: latency-svc-mngbg [1.284632289s] Jun 26 00:15:02.415: INFO: Created: latency-svc-vbqdw Jun 26 00:15:02.432: INFO: Got endpoints: latency-svc-vbqdw [1.26369713s] Jun 26 00:15:02.459: INFO: Created: latency-svc-hxg8w Jun 26 00:15:02.471: INFO: Got endpoints: latency-svc-hxg8w [1.181710303s] Jun 26 00:15:02.561: INFO: Created: latency-svc-fft29 Jun 26 00:15:02.567: INFO: Got endpoints: latency-svc-fft29 [1.247895378s] Jun 26 00:15:02.587: INFO: Created: latency-svc-qcrv4 Jun 26 00:15:02.604: INFO: Got endpoints: latency-svc-qcrv4 [1.14757816s] Jun 26 00:15:02.624: INFO: Created: latency-svc-s8drh Jun 26 00:15:02.633: INFO: Got endpoints: latency-svc-s8drh [1.130611633s] Jun 26 00:15:02.708: INFO: Created: latency-svc-srmpg Jun 26 00:15:02.718: INFO: Got endpoints: latency-svc-srmpg [1.069250432s] Jun 26 00:15:02.765: INFO: Created: latency-svc-98759 Jun 26 00:15:02.779: INFO: Got endpoints: latency-svc-98759 [1.071555732s] Jun 26 00:15:02.858: INFO: Created: latency-svc-67p6v Jun 26 00:15:02.890: INFO: Got endpoints: latency-svc-67p6v [1.100056183s] Jun 26 00:15:02.926: INFO: Created: latency-svc-2w2s4 Jun 26 00:15:02.941: INFO: Got endpoints: latency-svc-2w2s4 [1.09428229s] Jun 26 00:15:03.034: INFO: Created: latency-svc-nw858 Jun 26 00:15:03.056: INFO: Got endpoints: latency-svc-nw858 [1.09623353s] Jun 26 00:15:03.073: INFO: Created: latency-svc-cpc5v Jun 26 00:15:03.109: INFO: Got endpoints: latency-svc-cpc5v [1.02316809s] Jun 26 00:15:03.190: INFO: Created: latency-svc-gg4sn Jun 26 00:15:03.205: INFO: Got endpoints: latency-svc-gg4sn [1.065208671s] Jun 26 00:15:03.226: INFO: Created: latency-svc-k45vs Jun 26 00:15:03.242: INFO: Got endpoints: latency-svc-k45vs [996.49834ms] Jun 26 00:15:03.262: INFO: Created: latency-svc-pxnpq Jun 26 00:15:03.313: INFO: Got endpoints: latency-svc-pxnpq [1.009746352s] Jun 26 00:15:03.325: INFO: Created: latency-svc-87jpt Jun 26 00:15:03.350: INFO: Got endpoints: latency-svc-87jpt [956.20916ms] Jun 26 00:15:03.380: INFO: Created: latency-svc-z8zgp Jun 26 00:15:03.392: INFO: Got endpoints: latency-svc-z8zgp [960.107886ms] Jun 26 00:15:03.412: INFO: Created: latency-svc-88p68 Jun 26 00:15:03.468: INFO: Got endpoints: latency-svc-88p68 [997.244594ms] Jun 26 00:15:03.483: INFO: Created: latency-svc-x28xl Jun 26 00:15:03.500: INFO: Got endpoints: latency-svc-x28xl [933.466886ms] Jun 26 00:15:03.526: INFO: Created: latency-svc-wgq94 Jun 26 00:15:03.543: INFO: Got endpoints: latency-svc-wgq94 [938.572439ms] Jun 26 00:15:03.566: INFO: Created: latency-svc-7cw9g Jun 26 00:15:03.648: INFO: Got endpoints: latency-svc-7cw9g [1.014958086s] Jun 26 00:15:03.664: INFO: Created: latency-svc-9pxmj Jun 26 00:15:03.681: INFO: Got endpoints: latency-svc-9pxmj [963.414977ms] Jun 26 00:15:03.718: INFO: Created: latency-svc-fgxss Jun 26 00:15:03.736: INFO: Got endpoints: latency-svc-fgxss [956.884756ms] Jun 26 00:15:03.822: INFO: Created: latency-svc-mkpfw Jun 26 00:15:03.826: INFO: Got endpoints: latency-svc-mkpfw [935.846022ms] Jun 26 00:15:03.874: INFO: Created: latency-svc-22sn2 Jun 26 00:15:03.886: INFO: Got endpoints: latency-svc-22sn2 [944.963127ms] Jun 26 00:15:03.904: INFO: 
Created: latency-svc-9dmn7 Jun 26 00:15:03.916: INFO: Got endpoints: latency-svc-9dmn7 [859.886171ms] Jun 26 00:15:04.013: INFO: Created: latency-svc-dxjhg Jun 26 00:15:04.017: INFO: Got endpoints: latency-svc-dxjhg [908.252688ms] Jun 26 00:15:04.054: INFO: Created: latency-svc-5cs9d Jun 26 00:15:04.067: INFO: Got endpoints: latency-svc-5cs9d [861.224034ms] Jun 26 00:15:04.084: INFO: Created: latency-svc-kr9qr Jun 26 00:15:04.097: INFO: Got endpoints: latency-svc-kr9qr [854.500448ms] Jun 26 00:15:04.181: INFO: Created: latency-svc-d2w2f Jun 26 00:15:04.193: INFO: Got endpoints: latency-svc-d2w2f [880.846548ms] Jun 26 00:15:04.214: INFO: Created: latency-svc-brp8t Jun 26 00:15:04.224: INFO: Got endpoints: latency-svc-brp8t [873.421244ms] Jun 26 00:15:04.246: INFO: Created: latency-svc-2mmxt Jun 26 00:15:04.260: INFO: Got endpoints: latency-svc-2mmxt [867.88591ms] Jun 26 00:15:04.367: INFO: Created: latency-svc-4s2cf Jun 26 00:15:04.392: INFO: Got endpoints: latency-svc-4s2cf [923.816012ms] Jun 26 00:15:04.418: INFO: Created: latency-svc-8qsn9 Jun 26 00:15:04.435: INFO: Got endpoints: latency-svc-8qsn9 [934.075038ms] Jun 26 00:15:04.460: INFO: Created: latency-svc-7r67v Jun 26 00:15:04.528: INFO: Got endpoints: latency-svc-7r67v [985.213195ms] Jun 26 00:15:04.532: INFO: Created: latency-svc-pq7wb Jun 26 00:15:04.541: INFO: Got endpoints: latency-svc-pq7wb [892.680683ms] Jun 26 00:15:04.570: INFO: Created: latency-svc-lgzjw Jun 26 00:15:04.583: INFO: Got endpoints: latency-svc-lgzjw [901.97931ms] Jun 26 00:15:04.606: INFO: Created: latency-svc-tl2gb Jun 26 00:15:04.620: INFO: Got endpoints: latency-svc-tl2gb [883.876933ms] Jun 26 00:15:04.672: INFO: Created: latency-svc-7m9cr Jun 26 00:15:04.676: INFO: Got endpoints: latency-svc-7m9cr [849.164202ms] Jun 26 00:15:04.718: INFO: Created: latency-svc-7xj66 Jun 26 00:15:04.741: INFO: Got endpoints: latency-svc-7xj66 [855.037016ms] Jun 26 00:15:04.762: INFO: Created: latency-svc-kdmp5 Jun 26 00:15:04.827: INFO: Got endpoints: latency-svc-kdmp5 [911.256904ms] Jun 26 00:15:04.831: INFO: Created: latency-svc-k9gf4 Jun 26 00:15:04.837: INFO: Got endpoints: latency-svc-k9gf4 [820.024424ms] Jun 26 00:15:04.873: INFO: Created: latency-svc-xp9ff Jun 26 00:15:04.897: INFO: Got endpoints: latency-svc-xp9ff [830.600974ms] Jun 26 00:15:04.915: INFO: Created: latency-svc-55psz Jun 26 00:15:04.959: INFO: Got endpoints: latency-svc-55psz [862.297555ms] Jun 26 00:15:04.990: INFO: Created: latency-svc-t82k5 Jun 26 00:15:05.000: INFO: Got endpoints: latency-svc-t82k5 [806.549746ms] Jun 26 00:15:05.020: INFO: Created: latency-svc-bk8w5 Jun 26 00:15:05.031: INFO: Got endpoints: latency-svc-bk8w5 [806.878633ms] Jun 26 00:15:05.050: INFO: Created: latency-svc-9j89n Jun 26 00:15:05.127: INFO: Got endpoints: latency-svc-9j89n [867.079458ms] Jun 26 00:15:05.131: INFO: Created: latency-svc-2brgm Jun 26 00:15:05.149: INFO: Got endpoints: latency-svc-2brgm [757.405465ms] Jun 26 00:15:05.182: INFO: Created: latency-svc-fh7rc Jun 26 00:15:05.199: INFO: Got endpoints: latency-svc-fh7rc [764.195568ms] Jun 26 00:15:05.221: INFO: Created: latency-svc-cz72p Jun 26 00:15:05.271: INFO: Got endpoints: latency-svc-cz72p [742.471326ms] Jun 26 00:15:05.284: INFO: Created: latency-svc-dnkv4 Jun 26 00:15:05.318: INFO: Got endpoints: latency-svc-dnkv4 [776.575497ms] Jun 26 00:15:05.354: INFO: Created: latency-svc-v2cbw Jun 26 00:15:05.367: INFO: Got endpoints: latency-svc-v2cbw [784.096261ms] Jun 26 00:15:05.420: INFO: Created: latency-svc-sn56l Jun 26 00:15:05.423: INFO: Got endpoints: 
latency-svc-sn56l [803.305544ms] Jun 26 00:15:05.476: INFO: Created: latency-svc-wtcfs Jun 26 00:15:05.500: INFO: Got endpoints: latency-svc-wtcfs [824.121137ms] Jun 26 00:15:05.559: INFO: Created: latency-svc-zqbl5 Jun 26 00:15:05.569: INFO: Got endpoints: latency-svc-zqbl5 [827.740427ms] Jun 26 00:15:05.623: INFO: Created: latency-svc-9v5fx Jun 26 00:15:05.633: INFO: Got endpoints: latency-svc-9v5fx [805.103851ms] Jun 26 00:15:05.702: INFO: Created: latency-svc-c7x2d Jun 26 00:15:05.715: INFO: Got endpoints: latency-svc-c7x2d [878.25245ms] Jun 26 00:15:05.770: INFO: Created: latency-svc-b7hgv Jun 26 00:15:05.783: INFO: Got endpoints: latency-svc-b7hgv [885.400898ms] Jun 26 00:15:05.846: INFO: Created: latency-svc-2p4zk Jun 26 00:15:05.849: INFO: Got endpoints: latency-svc-2p4zk [890.36195ms] Jun 26 00:15:05.887: INFO: Created: latency-svc-ltrph Jun 26 00:15:05.926: INFO: Got endpoints: latency-svc-ltrph [925.620534ms] Jun 26 00:15:05.995: INFO: Created: latency-svc-q45f2 Jun 26 00:15:06.006: INFO: Got endpoints: latency-svc-q45f2 [975.744317ms] Jun 26 00:15:06.025: INFO: Created: latency-svc-q7pjd Jun 26 00:15:06.037: INFO: Got endpoints: latency-svc-q7pjd [909.724646ms] Jun 26 00:15:06.073: INFO: Created: latency-svc-stzgl Jun 26 00:15:06.174: INFO: Got endpoints: latency-svc-stzgl [1.025012346s] Jun 26 00:15:06.190: INFO: Created: latency-svc-wxhgf Jun 26 00:15:06.205: INFO: Got endpoints: latency-svc-wxhgf [1.006634371s] Jun 26 00:15:06.254: INFO: Created: latency-svc-zhk5p Jun 26 00:15:06.266: INFO: Got endpoints: latency-svc-zhk5p [995.186699ms] Jun 26 00:15:06.331: INFO: Created: latency-svc-zklpp Jun 26 00:15:06.351: INFO: Got endpoints: latency-svc-zklpp [1.033425761s] Jun 26 00:15:06.394: INFO: Created: latency-svc-bljhc Jun 26 00:15:06.411: INFO: Got endpoints: latency-svc-bljhc [1.043434315s] Jun 26 00:15:06.429: INFO: Created: latency-svc-bh69l Jun 26 00:15:06.487: INFO: Got endpoints: latency-svc-bh69l [1.063745032s] Jun 26 00:15:06.498: INFO: Created: latency-svc-q4khw Jun 26 00:15:06.529: INFO: Got endpoints: latency-svc-q4khw [1.029330865s] Jun 26 00:15:06.561: INFO: Created: latency-svc-dbkrh Jun 26 00:15:06.579: INFO: Got endpoints: latency-svc-dbkrh [1.010088068s] Jun 26 00:15:06.632: INFO: Created: latency-svc-lhs4b Jun 26 00:15:06.633: INFO: Got endpoints: latency-svc-lhs4b [1.00059479s] Jun 26 00:15:06.669: INFO: Created: latency-svc-5k668 Jun 26 00:15:06.682: INFO: Got endpoints: latency-svc-5k668 [966.221897ms] Jun 26 00:15:06.703: INFO: Created: latency-svc-qhr7s Jun 26 00:15:06.718: INFO: Got endpoints: latency-svc-qhr7s [935.441485ms] Jun 26 00:15:06.770: INFO: Created: latency-svc-99t72 Jun 26 00:15:06.782: INFO: Got endpoints: latency-svc-99t72 [932.069887ms] Jun 26 00:15:06.810: INFO: Created: latency-svc-28phl Jun 26 00:15:06.820: INFO: Got endpoints: latency-svc-28phl [894.448819ms] Jun 26 00:15:06.849: INFO: Created: latency-svc-vz4bh Jun 26 00:15:06.862: INFO: Got endpoints: latency-svc-vz4bh [855.981261ms] Jun 26 00:15:06.935: INFO: Created: latency-svc-qpv2x Jun 26 00:15:06.947: INFO: Got endpoints: latency-svc-qpv2x [909.934275ms] Jun 26 00:15:06.966: INFO: Created: latency-svc-r4b55 Jun 26 00:15:06.983: INFO: Got endpoints: latency-svc-r4b55 [808.706963ms] Jun 26 00:15:07.003: INFO: Created: latency-svc-vllzn Jun 26 00:15:07.029: INFO: Got endpoints: latency-svc-vllzn [823.787561ms] Jun 26 00:15:07.122: INFO: Created: latency-svc-564p7 Jun 26 00:15:07.127: INFO: Got endpoints: latency-svc-564p7 [860.829495ms] Jun 26 00:15:07.183: INFO: Created: 
latency-svc-tdks6 Jun 26 00:15:07.194: INFO: Got endpoints: latency-svc-tdks6 [843.124459ms] Jun 26 00:15:07.223: INFO: Created: latency-svc-gnsg6 Jun 26 00:15:07.258: INFO: Got endpoints: latency-svc-gnsg6 [847.526496ms] Jun 26 00:15:07.281: INFO: Created: latency-svc-pjrqp Jun 26 00:15:07.295: INFO: Got endpoints: latency-svc-pjrqp [808.425392ms] Jun 26 00:15:07.323: INFO: Created: latency-svc-x2vgg Jun 26 00:15:07.337: INFO: Got endpoints: latency-svc-x2vgg [808.065009ms] Jun 26 00:15:07.390: INFO: Created: latency-svc-g27lv Jun 26 00:15:07.410: INFO: Got endpoints: latency-svc-g27lv [831.29141ms] Jun 26 00:15:07.441: INFO: Created: latency-svc-cx4zp Jun 26 00:15:07.459: INFO: Got endpoints: latency-svc-cx4zp [825.131125ms] Jun 26 00:15:07.528: INFO: Created: latency-svc-5x67f Jun 26 00:15:07.557: INFO: Got endpoints: latency-svc-5x67f [875.46537ms] Jun 26 00:15:07.557: INFO: Created: latency-svc-dg6tm Jun 26 00:15:07.578: INFO: Got endpoints: latency-svc-dg6tm [859.990005ms] Jun 26 00:15:07.603: INFO: Created: latency-svc-q8t28 Jun 26 00:15:07.615: INFO: Got endpoints: latency-svc-q8t28 [832.908816ms] Jun 26 00:15:07.696: INFO: Created: latency-svc-6n527 Jun 26 00:15:07.744: INFO: Created: latency-svc-pqlzz Jun 26 00:15:07.744: INFO: Got endpoints: latency-svc-6n527 [923.292036ms] Jun 26 00:15:07.771: INFO: Got endpoints: latency-svc-pqlzz [908.461877ms] Jun 26 00:15:07.834: INFO: Created: latency-svc-srszz Jun 26 00:15:07.854: INFO: Got endpoints: latency-svc-srszz [906.958397ms] Jun 26 00:15:07.873: INFO: Created: latency-svc-7d47m Jun 26 00:15:07.892: INFO: Got endpoints: latency-svc-7d47m [908.591248ms] Jun 26 00:15:07.977: INFO: Created: latency-svc-29l2w Jun 26 00:15:08.013: INFO: Created: latency-svc-8lz8m Jun 26 00:15:08.013: INFO: Got endpoints: latency-svc-29l2w [983.831172ms] Jun 26 00:15:08.047: INFO: Got endpoints: latency-svc-8lz8m [919.715562ms] Jun 26 00:15:08.077: INFO: Created: latency-svc-qz2xk Jun 26 00:15:08.122: INFO: Got endpoints: latency-svc-qz2xk [927.488128ms] Jun 26 00:15:08.145: INFO: Created: latency-svc-gvdjk Jun 26 00:15:08.163: INFO: Got endpoints: latency-svc-gvdjk [904.1596ms] Jun 26 00:15:08.188: INFO: Created: latency-svc-bx7j9 Jun 26 00:15:08.205: INFO: Got endpoints: latency-svc-bx7j9 [909.75823ms] Jun 26 00:15:08.253: INFO: Created: latency-svc-tt5ld Jun 26 00:15:08.259: INFO: Got endpoints: latency-svc-tt5ld [921.744801ms] Jun 26 00:15:08.297: INFO: Created: latency-svc-s7znd Jun 26 00:15:08.314: INFO: Got endpoints: latency-svc-s7znd [903.301706ms] Jun 26 00:15:08.343: INFO: Created: latency-svc-v7kck Jun 26 00:15:08.384: INFO: Got endpoints: latency-svc-v7kck [925.636304ms] Jun 26 00:15:08.409: INFO: Created: latency-svc-8rgh5 Jun 26 00:15:08.422: INFO: Got endpoints: latency-svc-8rgh5 [864.644865ms] Jun 26 00:15:08.445: INFO: Created: latency-svc-7bplq Jun 26 00:15:08.458: INFO: Got endpoints: latency-svc-7bplq [879.865659ms] Jun 26 00:15:08.542: INFO: Created: latency-svc-ms72j Jun 26 00:15:08.548: INFO: Got endpoints: latency-svc-ms72j [933.884006ms] Jun 26 00:15:08.575: INFO: Created: latency-svc-dpsxc Jun 26 00:15:08.607: INFO: Got endpoints: latency-svc-dpsxc [863.718286ms] Jun 26 00:15:08.638: INFO: Created: latency-svc-bkxwn Jun 26 00:15:08.684: INFO: Got endpoints: latency-svc-bkxwn [913.053543ms] Jun 26 00:15:08.697: INFO: Created: latency-svc-wk2wv Jun 26 00:15:08.712: INFO: Got endpoints: latency-svc-wk2wv [858.261938ms] Jun 26 00:15:08.730: INFO: Created: latency-svc-x9fbv Jun 26 00:15:08.755: INFO: Got endpoints: 
latency-svc-x9fbv [862.612511ms] Jun 26 00:15:08.816: INFO: Created: latency-svc-vxtkh Jun 26 00:15:08.819: INFO: Got endpoints: latency-svc-vxtkh [805.861977ms] Jun 26 00:15:08.883: INFO: Created: latency-svc-8mjt6 Jun 26 00:15:08.899: INFO: Got endpoints: latency-svc-8mjt6 [852.293481ms] Jun 26 00:15:08.978: INFO: Created: latency-svc-7j6xp Jun 26 00:15:09.007: INFO: Got endpoints: latency-svc-7j6xp [884.813958ms] Jun 26 00:15:09.030: INFO: Created: latency-svc-pg774 Jun 26 00:15:09.043: INFO: Got endpoints: latency-svc-pg774 [880.524555ms] Jun 26 00:15:09.061: INFO: Created: latency-svc-bth9k Jun 26 00:15:09.139: INFO: Got endpoints: latency-svc-bth9k [933.563338ms] Jun 26 00:15:09.142: INFO: Created: latency-svc-6bbng Jun 26 00:15:09.171: INFO: Got endpoints: latency-svc-6bbng [912.050262ms] Jun 26 00:15:09.207: INFO: Created: latency-svc-dvj2b Jun 26 00:15:09.228: INFO: Got endpoints: latency-svc-dvj2b [914.306999ms] Jun 26 00:15:09.282: INFO: Created: latency-svc-n64tf Jun 26 00:15:09.290: INFO: Got endpoints: latency-svc-n64tf [905.979648ms] Jun 26 00:15:09.340: INFO: Created: latency-svc-hkhld Jun 26 00:15:09.351: INFO: Got endpoints: latency-svc-hkhld [928.901365ms] Jun 26 00:15:09.372: INFO: Created: latency-svc-bv67w Jun 26 00:15:09.432: INFO: Got endpoints: latency-svc-bv67w [973.77739ms] Jun 26 00:15:09.471: INFO: Created: latency-svc-45ztd Jun 26 00:15:09.489: INFO: Got endpoints: latency-svc-45ztd [940.815506ms] Jun 26 00:15:09.595: INFO: Created: latency-svc-rcv2z Jun 26 00:15:09.598: INFO: Got endpoints: latency-svc-rcv2z [990.455953ms] Jun 26 00:15:09.693: INFO: Created: latency-svc-llzth Jun 26 00:15:09.738: INFO: Got endpoints: latency-svc-llzth [1.053682891s] Jun 26 00:15:09.759: INFO: Created: latency-svc-pgxfk Jun 26 00:15:09.773: INFO: Got endpoints: latency-svc-pgxfk [1.06096003s] Jun 26 00:15:09.793: INFO: Created: latency-svc-66rsb Jun 26 00:15:09.809: INFO: Got endpoints: latency-svc-66rsb [1.054249257s] Jun 26 00:15:09.894: INFO: Created: latency-svc-cvgbq Jun 26 00:15:09.910: INFO: Got endpoints: latency-svc-cvgbq [1.091190748s] Jun 26 00:15:09.939: INFO: Created: latency-svc-8vdkz Jun 26 00:15:09.953: INFO: Got endpoints: latency-svc-8vdkz [1.053799984s] Jun 26 00:15:09.975: INFO: Created: latency-svc-hxrtz Jun 26 00:15:09.989: INFO: Got endpoints: latency-svc-hxrtz [982.276751ms] Jun 26 00:15:10.056: INFO: Created: latency-svc-4fgmg Jun 26 00:15:10.067: INFO: Got endpoints: latency-svc-4fgmg [1.023634986s] Jun 26 00:15:10.086: INFO: Created: latency-svc-794jj Jun 26 00:15:10.102: INFO: Got endpoints: latency-svc-794jj [963.477886ms] Jun 26 00:15:10.123: INFO: Created: latency-svc-6pphv Jun 26 00:15:10.139: INFO: Got endpoints: latency-svc-6pphv [967.698857ms] Jun 26 00:15:10.203: INFO: Created: latency-svc-5dxv9 Jun 26 00:15:10.223: INFO: Got endpoints: latency-svc-5dxv9 [994.701105ms] Jun 26 00:15:10.245: INFO: Created: latency-svc-lgzx2 Jun 26 00:15:10.253: INFO: Got endpoints: latency-svc-lgzx2 [962.236626ms] Jun 26 00:15:10.272: INFO: Created: latency-svc-2pckw Jun 26 00:15:10.289: INFO: Got endpoints: latency-svc-2pckw [938.655401ms] Jun 26 00:15:10.339: INFO: Created: latency-svc-c2hws Jun 26 00:15:10.340: INFO: Got endpoints: latency-svc-c2hws [908.120078ms] Jun 26 00:15:10.383: INFO: Created: latency-svc-jpzvm Jun 26 00:15:10.404: INFO: Got endpoints: latency-svc-jpzvm [914.69228ms] Jun 26 00:15:10.431: INFO: Created: latency-svc-2t6h9 Jun 26 00:15:10.492: INFO: Got endpoints: latency-svc-2t6h9 [894.338216ms] Jun 26 00:15:10.506: INFO: Created: 
latency-svc-llh99 Jun 26 00:15:10.518: INFO: Got endpoints: latency-svc-llh99 [780.450271ms] Jun 26 00:15:10.539: INFO: Created: latency-svc-hg5q8 Jun 26 00:15:10.548: INFO: Got endpoints: latency-svc-hg5q8 [775.161498ms] Jun 26 00:15:10.566: INFO: Created: latency-svc-7gvsv Jun 26 00:15:10.654: INFO: Got endpoints: latency-svc-7gvsv [845.385977ms] Jun 26 00:15:10.656: INFO: Created: latency-svc-xn26h Jun 26 00:15:10.663: INFO: Got endpoints: latency-svc-xn26h [752.923413ms] Jun 26 00:15:10.689: INFO: Created: latency-svc-9d2bk Jun 26 00:15:10.699: INFO: Got endpoints: latency-svc-9d2bk [746.432578ms] Jun 26 00:15:10.722: INFO: Created: latency-svc-ccf4v Jun 26 00:15:10.752: INFO: Got endpoints: latency-svc-ccf4v [762.930239ms] Jun 26 00:15:10.809: INFO: Created: latency-svc-fq6g8 Jun 26 00:15:10.862: INFO: Got endpoints: latency-svc-fq6g8 [795.479081ms] Jun 26 00:15:10.886: INFO: Created: latency-svc-mbglw Jun 26 00:15:10.905: INFO: Got endpoints: latency-svc-mbglw [802.208528ms] Jun 26 00:15:10.953: INFO: Created: latency-svc-86j27 Jun 26 00:15:10.980: INFO: Got endpoints: latency-svc-86j27 [841.186838ms] Jun 26 00:15:11.010: INFO: Created: latency-svc-9lpkm Jun 26 00:15:11.025: INFO: Got endpoints: latency-svc-9lpkm [802.674565ms] Jun 26 00:15:11.042: INFO: Created: latency-svc-6scjv Jun 26 00:15:11.098: INFO: Got endpoints: latency-svc-6scjv [845.392751ms] Jun 26 00:15:11.120: INFO: Created: latency-svc-5pk87 Jun 26 00:15:11.154: INFO: Got endpoints: latency-svc-5pk87 [864.774141ms] Jun 26 00:15:11.184: INFO: Created: latency-svc-d9bfd Jun 26 00:15:11.235: INFO: Got endpoints: latency-svc-d9bfd [894.482274ms] Jun 26 00:15:11.238: INFO: Created: latency-svc-6gdbr Jun 26 00:15:11.260: INFO: Got endpoints: latency-svc-6gdbr [855.920424ms] Jun 26 00:15:11.289: INFO: Created: latency-svc-hvmwt Jun 26 00:15:11.313: INFO: Got endpoints: latency-svc-hvmwt [820.868517ms] Jun 26 00:15:11.360: INFO: Created: latency-svc-rvktf Jun 26 00:15:11.407: INFO: Got endpoints: latency-svc-rvktf [888.31983ms] Jun 26 00:15:11.407: INFO: Created: latency-svc-g5sgj Jun 26 00:15:11.443: INFO: Got endpoints: latency-svc-g5sgj [894.551609ms] Jun 26 00:15:11.516: INFO: Created: latency-svc-mhwmb Jun 26 00:15:11.520: INFO: Got endpoints: latency-svc-mhwmb [865.431739ms] Jun 26 00:15:11.552: INFO: Created: latency-svc-cb5j6 Jun 26 00:15:11.568: INFO: Got endpoints: latency-svc-cb5j6 [904.969503ms] Jun 26 00:15:11.595: INFO: Created: latency-svc-gwklm Jun 26 00:15:11.612: INFO: Got endpoints: latency-svc-gwklm [912.835382ms] Jun 26 00:15:11.690: INFO: Created: latency-svc-qk7lz Jun 26 00:15:11.718: INFO: Got endpoints: latency-svc-qk7lz [965.739425ms] Jun 26 00:15:11.742: INFO: Created: latency-svc-msc4h Jun 26 00:15:11.755: INFO: Got endpoints: latency-svc-msc4h [892.057389ms] Jun 26 00:15:11.775: INFO: Created: latency-svc-dz6rn Jun 26 00:15:11.852: INFO: Got endpoints: latency-svc-dz6rn [947.196392ms] Jun 26 00:15:11.859: INFO: Created: latency-svc-vzxp7 Jun 26 00:15:11.886: INFO: Got endpoints: latency-svc-vzxp7 [906.101239ms] Jun 26 00:15:11.922: INFO: Created: latency-svc-948s9 Jun 26 00:15:11.983: INFO: Got endpoints: latency-svc-948s9 [957.631312ms] Jun 26 00:15:11.983: INFO: Latencies: [105.473544ms 117.822548ms 167.809983ms 237.457882ms 261.568286ms 321.332403ms 410.74977ms 459.504598ms 574.210439ms 639.556433ms 742.471326ms 746.432578ms 752.923413ms 757.405465ms 762.930239ms 764.195568ms 775.161498ms 776.575497ms 780.450271ms 784.096261ms 795.479081ms 802.208528ms 802.674565ms 803.305544ms 805.103851ms 
805.861977ms 806.549746ms 806.878633ms 808.065009ms 808.425392ms 808.633986ms 808.706963ms 820.024424ms 820.868517ms 823.787561ms 824.121137ms 825.131125ms 827.740427ms 830.600974ms 831.29141ms 832.908816ms 841.186838ms 843.124459ms 845.385977ms 845.392751ms 847.526496ms 849.164202ms 852.293481ms 854.500448ms 855.037016ms 855.920424ms 855.981261ms 858.261938ms 859.886171ms 859.990005ms 860.829495ms 861.224034ms 862.297555ms 862.612511ms 863.718286ms 864.644865ms 864.774141ms 865.431739ms 867.079458ms 867.88591ms 867.984288ms 873.421244ms 875.46537ms 878.25245ms 879.865659ms 880.524555ms 880.846548ms 883.876933ms 884.813958ms 885.400898ms 888.31983ms 890.36195ms 892.057389ms 892.680683ms 894.338216ms 894.448819ms 894.482274ms 894.551609ms 901.97931ms 903.301706ms 904.1596ms 904.969503ms 905.979648ms 906.101239ms 906.958397ms 908.120078ms 908.252688ms 908.461877ms 908.591248ms 909.724646ms 909.75823ms 909.934275ms 911.256904ms 912.050262ms 912.835382ms 913.053543ms 914.306999ms 914.69228ms 919.715562ms 921.744801ms 923.292036ms 923.816012ms 925.620534ms 925.636304ms 927.488128ms 928.901365ms 932.069887ms 933.466886ms 933.563338ms 933.884006ms 934.075038ms 935.441485ms 935.846022ms 938.572439ms 938.655401ms 940.815506ms 944.963127ms 945.873262ms 947.196392ms 956.20916ms 956.884756ms 957.631312ms 960.107886ms 962.236626ms 963.414977ms 963.477886ms 965.739425ms 966.221897ms 967.698857ms 973.77739ms 975.744317ms 982.276751ms 983.831172ms 985.213195ms 990.455953ms 994.701105ms 995.186699ms 996.49834ms 997.244594ms 1.00059479s 1.006421525s 1.006634371s 1.009746352s 1.010088068s 1.014958086s 1.02316809s 1.023634986s 1.025012346s 1.029330865s 1.033425761s 1.043434315s 1.053682891s 1.053799984s 1.054249257s 1.06096003s 1.063745032s 1.065208671s 1.069250432s 1.070912231s 1.071555732s 1.091190748s 1.09428229s 1.09603735s 1.09623353s 1.100056183s 1.130611633s 1.14757816s 1.170803676s 1.171224383s 1.17879489s 1.180105597s 1.181710303s 1.183058479s 1.184613673s 1.197000967s 1.208749689s 1.213768152s 1.215026542s 1.222626595s 1.225541381s 1.230951438s 1.241166538s 1.247895378s 1.251637815s 1.251806997s 1.251880526s 1.257386034s 1.26369713s 1.267741549s 1.27572066s 1.283418551s 1.284632289s 1.290002152s 1.292562552s 1.293420008s] Jun 26 00:15:11.983: INFO: 50 %ile: 913.053543ms Jun 26 00:15:11.983: INFO: 90 %ile: 1.208749689s Jun 26 00:15:11.983: INFO: 99 %ile: 1.292562552s Jun 26 00:15:11.983: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:15:11.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9473" for this suite. 
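
The 50/90/99 %ile lines above are percentiles taken over the sorted 200-sample set. A minimal standalone Go sketch of the same arithmetic (a simple nearest-rank rule is assumed here; the e2e framework's exact rounding may differ):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the value at the given percentile of a sorted
// slice of durations, using a nearest-rank rule (assumption).
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := (len(sorted) * p) / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A handful of the 200 samples logged above, for illustration.
	samples := []time.Duration{
		862612511 * time.Nanosecond,
		913053543 * time.Nanosecond,
		1208749689 * time.Nanosecond,
		1292562552 * time.Nanosecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}
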
• [SLOW TEST:17.183 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":294,"completed":120,"skipped":1803,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:15:12.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:15:12.606: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 00:15:14.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727312, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727312, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727312, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727312, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:15:17.659: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in 
the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:15:17.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-58" for this suite. STEP: Destroying namespace "webhook-58-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.061 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":294,"completed":121,"skipped":1808,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:15:18.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-3356 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3356 STEP: creating replication controller externalsvc in namespace services-3356 I0626 00:15:18.626357 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3356, replica count: 2 I0626 00:15:21.676782 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 00:15:24.677038 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jun 26 00:15:24.958: INFO: Creating new exec pod Jun 26 00:15:28.986: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3356 execpodqrsxc -- /bin/sh -x -c nslookup nodeport-service' Jun 26 00:15:29.482: INFO: stderr: "I0626 00:15:29.114554 1574 log.go:172] (0xc000a88000) (0xc00057b0e0) Create stream\nI0626 00:15:29.114629 1574 log.go:172] (0xc000a88000) (0xc00057b0e0) Stream added, broadcasting: 1\nI0626 00:15:29.118066 1574 log.go:172] (0xc000a88000) Reply frame received for 1\nI0626 00:15:29.118133 1574 log.go:172] (0xc000a88000) (0xc000536140) Create stream\nI0626 00:15:29.118169 1574 log.go:172] (0xc000a88000) (0xc000536140) Stream added, broadcasting: 3\nI0626 00:15:29.119207 1574 log.go:172] (0xc000a88000) Reply frame received for 3\nI0626 00:15:29.119264 1574 log.go:172] (0xc000a88000) (0xc00057b900) Create stream\nI0626 00:15:29.119285 1574 log.go:172] (0xc000a88000) (0xc00057b900) Stream added, broadcasting: 5\nI0626 00:15:29.120326 1574 log.go:172] (0xc000a88000) Reply frame received for 5\nI0626 00:15:29.225807 1574 log.go:172] (0xc000a88000) Data frame received for 5\nI0626 00:15:29.225828 1574 log.go:172] (0xc00057b900) (5) Data frame handling\nI0626 00:15:29.225839 1574 log.go:172] (0xc00057b900) (5) Data frame sent\n+ nslookup nodeport-service\nI0626 00:15:29.470752 1574 log.go:172] (0xc000a88000) Data frame received for 3\nI0626 00:15:29.470780 1574 log.go:172] (0xc000536140) (3) Data frame handling\nI0626 00:15:29.470796 1574 log.go:172] (0xc000536140) (3) Data frame sent\nI0626 00:15:29.471682 1574 log.go:172] (0xc000a88000) Data frame received for 3\nI0626 00:15:29.471704 1574 log.go:172] (0xc000536140) (3) Data frame handling\nI0626 00:15:29.471715 1574 log.go:172] (0xc000536140) (3) Data frame sent\nI0626 00:15:29.472145 1574 log.go:172] (0xc000a88000) Data frame received for 5\nI0626 00:15:29.472167 1574 log.go:172] (0xc00057b900) (5) Data frame handling\nI0626 00:15:29.472190 1574 log.go:172] (0xc000a88000) Data frame received for 3\nI0626 00:15:29.472204 1574 log.go:172] (0xc000536140) (3) Data frame handling\nI0626 00:15:29.474095 1574 log.go:172] (0xc000a88000) Data frame received for 1\nI0626 00:15:29.474121 1574 log.go:172] (0xc00057b0e0) (1) Data frame handling\nI0626 00:15:29.474136 1574 log.go:172] (0xc00057b0e0) (1) Data frame sent\nI0626 00:15:29.474161 1574 log.go:172] (0xc000a88000) (0xc00057b0e0) Stream removed, broadcasting: 1\nI0626 00:15:29.474175 1574 log.go:172] (0xc000a88000) Go away received\nI0626 00:15:29.474474 1574 log.go:172] (0xc000a88000) (0xc00057b0e0) Stream removed, broadcasting: 1\nI0626 00:15:29.474489 1574 log.go:172] (0xc000a88000) (0xc000536140) Stream removed, broadcasting: 3\nI0626 00:15:29.474495 1574 log.go:172] (0xc000a88000) (0xc00057b900) Stream removed, broadcasting: 5\n" Jun 26 00:15:29.482: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3356.svc.cluster.local\tcanonical name = externalsvc.services-3356.svc.cluster.local.\nName:\texternalsvc.services-3356.svc.cluster.local\nAddress: 10.96.170.235\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3356, will wait for the garbage collector to delete the pods Jun 26 00:15:29.548: INFO: Deleting ReplicationController externalsvc took: 5.571918ms Jun 26 00:15:29.648: INFO: Terminating ReplicationController externalsvc pods took: 100.232319ms Jun 26 00:15:45.371: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:15:45.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3356" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:27.336 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":294,"completed":122,"skipped":1829,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:15:45.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:15:49.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1804" for this suite. 
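
The Kubelet test above runs a container whose command always fails and asserts that its status carries a Terminated state with a reason. A hedged client-go sketch of the same check; the pod name here is a placeholder, not one recorded in this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The e2e run uses /root/.kube/config; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Hypothetical pod name standing in for the test's failing pod.
	pod, err := client.CoreV1().Pods("kubelet-test-1804").Get(
		context.TODO(), "bin-false-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if term := cs.State.Terminated; term != nil {
			// The test expects a non-empty Reason, e.g. "Error".
			fmt.Printf("container %s terminated, reason=%q exitCode=%d\n",
				cs.Name, term.Reason, term.ExitCode)
		}
	}
}
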
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":294,"completed":123,"skipped":1851,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:15:49.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:16:49.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2673" for this suite. 
• [SLOW TEST:60.326 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":294,"completed":124,"skipped":1925,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:16:49.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6583 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-6583 I0626 00:16:50.104279 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6583, replica count: 2 I0626 00:16:53.154755 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 00:16:56.155006 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 26 00:16:56.155: INFO: Creating new exec pod Jun 26 00:17:01.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6583 execpodzh7cf -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 26 00:17:01.453: INFO: stderr: "I0626 00:17:01.331996 1594 log.go:172] (0xc000aa20b0) (0xc00069b2c0) Create stream\nI0626 00:17:01.332076 1594 log.go:172] (0xc000aa20b0) (0xc00069b2c0) Stream added, broadcasting: 1\nI0626 00:17:01.334076 1594 log.go:172] (0xc000aa20b0) Reply frame received for 1\nI0626 00:17:01.334119 1594 log.go:172] (0xc000aa20b0) (0xc0005ecb40) Create stream\nI0626 00:17:01.334136 1594 log.go:172] (0xc000aa20b0) (0xc0005ecb40) Stream added, broadcasting: 3\nI0626 00:17:01.335124 1594 log.go:172] (0xc000aa20b0) Reply frame received for 3\nI0626 00:17:01.335164 1594 log.go:172] (0xc000aa20b0) (0xc0005ed040) Create stream\nI0626 00:17:01.335178 1594 log.go:172] (0xc000aa20b0) (0xc0005ed040) Stream added, broadcasting: 5\nI0626 00:17:01.336151 1594 log.go:172] 
(0xc000aa20b0) Reply frame received for 5\nI0626 00:17:01.423208 1594 log.go:172] (0xc000aa20b0) Data frame received for 5\nI0626 00:17:01.423239 1594 log.go:172] (0xc0005ed040) (5) Data frame handling\nI0626 00:17:01.423261 1594 log.go:172] (0xc0005ed040) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0626 00:17:01.442671 1594 log.go:172] (0xc000aa20b0) Data frame received for 5\nI0626 00:17:01.442717 1594 log.go:172] (0xc0005ed040) (5) Data frame handling\nI0626 00:17:01.442818 1594 log.go:172] (0xc0005ed040) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0626 00:17:01.443061 1594 log.go:172] (0xc000aa20b0) Data frame received for 5\nI0626 00:17:01.443087 1594 log.go:172] (0xc0005ed040) (5) Data frame handling\nI0626 00:17:01.443384 1594 log.go:172] (0xc000aa20b0) Data frame received for 3\nI0626 00:17:01.443401 1594 log.go:172] (0xc0005ecb40) (3) Data frame handling\nI0626 00:17:01.445967 1594 log.go:172] (0xc000aa20b0) Data frame received for 1\nI0626 00:17:01.445996 1594 log.go:172] (0xc00069b2c0) (1) Data frame handling\nI0626 00:17:01.446013 1594 log.go:172] (0xc00069b2c0) (1) Data frame sent\nI0626 00:17:01.446347 1594 log.go:172] (0xc000aa20b0) (0xc00069b2c0) Stream removed, broadcasting: 1\nI0626 00:17:01.446396 1594 log.go:172] (0xc000aa20b0) Go away received\nI0626 00:17:01.446852 1594 log.go:172] (0xc000aa20b0) (0xc00069b2c0) Stream removed, broadcasting: 1\nI0626 00:17:01.446880 1594 log.go:172] (0xc000aa20b0) (0xc0005ecb40) Stream removed, broadcasting: 3\nI0626 00:17:01.446893 1594 log.go:172] (0xc000aa20b0) (0xc0005ed040) Stream removed, broadcasting: 5\n" Jun 26 00:17:01.453: INFO: stdout: "" Jun 26 00:17:01.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6583 execpodzh7cf -- /bin/sh -x -c nc -zv -t -w 2 10.102.93.216 80' Jun 26 00:17:01.665: INFO: stderr: "I0626 00:17:01.589776 1615 log.go:172] (0xc000a81080) (0xc000b5e0a0) Create stream\nI0626 00:17:01.589827 1615 log.go:172] (0xc000a81080) (0xc000b5e0a0) Stream added, broadcasting: 1\nI0626 00:17:01.594799 1615 log.go:172] (0xc000a81080) Reply frame received for 1\nI0626 00:17:01.594868 1615 log.go:172] (0xc000a81080) (0xc0006e7e00) Create stream\nI0626 00:17:01.594885 1615 log.go:172] (0xc000a81080) (0xc0006e7e00) Stream added, broadcasting: 3\nI0626 00:17:01.595994 1615 log.go:172] (0xc000a81080) Reply frame received for 3\nI0626 00:17:01.596058 1615 log.go:172] (0xc000a81080) (0xc000424b40) Create stream\nI0626 00:17:01.596074 1615 log.go:172] (0xc000a81080) (0xc000424b40) Stream added, broadcasting: 5\nI0626 00:17:01.597488 1615 log.go:172] (0xc000a81080) Reply frame received for 5\nI0626 00:17:01.656662 1615 log.go:172] (0xc000a81080) Data frame received for 3\nI0626 00:17:01.656689 1615 log.go:172] (0xc0006e7e00) (3) Data frame handling\nI0626 00:17:01.656728 1615 log.go:172] (0xc000a81080) Data frame received for 5\nI0626 00:17:01.656765 1615 log.go:172] (0xc000424b40) (5) Data frame handling\nI0626 00:17:01.656797 1615 log.go:172] (0xc000424b40) (5) Data frame sent\nI0626 00:17:01.656818 1615 log.go:172] (0xc000a81080) Data frame received for 5\nI0626 00:17:01.656828 1615 log.go:172] (0xc000424b40) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.93.216 80\nConnection to 10.102.93.216 80 port [tcp/http] succeeded!\nI0626 00:17:01.658681 1615 log.go:172] (0xc000a81080) Data frame received for 1\nI0626 00:17:01.658716 1615 log.go:172] (0xc000b5e0a0) (1) Data frame 
handling\nI0626 00:17:01.658748 1615 log.go:172] (0xc000b5e0a0) (1) Data frame sent\nI0626 00:17:01.658781 1615 log.go:172] (0xc000a81080) (0xc000b5e0a0) Stream removed, broadcasting: 1\nI0626 00:17:01.658841 1615 log.go:172] (0xc000a81080) Go away received\nI0626 00:17:01.659174 1615 log.go:172] (0xc000a81080) (0xc000b5e0a0) Stream removed, broadcasting: 1\nI0626 00:17:01.659201 1615 log.go:172] (0xc000a81080) (0xc0006e7e00) Stream removed, broadcasting: 3\nI0626 00:17:01.659213 1615 log.go:172] (0xc000a81080) (0xc000424b40) Stream removed, broadcasting: 5\n" Jun 26 00:17:01.665: INFO: stdout: "" Jun 26 00:17:01.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6583 execpodzh7cf -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31583' Jun 26 00:17:01.881: INFO: stderr: "I0626 00:17:01.796292 1636 log.go:172] (0xc000a1f1e0) (0xc000b7e6e0) Create stream\nI0626 00:17:01.796345 1636 log.go:172] (0xc000a1f1e0) (0xc000b7e6e0) Stream added, broadcasting: 1\nI0626 00:17:01.801701 1636 log.go:172] (0xc000a1f1e0) Reply frame received for 1\nI0626 00:17:01.801740 1636 log.go:172] (0xc000a1f1e0) (0xc000842dc0) Create stream\nI0626 00:17:01.801751 1636 log.go:172] (0xc000a1f1e0) (0xc000842dc0) Stream added, broadcasting: 3\nI0626 00:17:01.802666 1636 log.go:172] (0xc000a1f1e0) Reply frame received for 3\nI0626 00:17:01.802718 1636 log.go:172] (0xc000a1f1e0) (0xc000626aa0) Create stream\nI0626 00:17:01.802732 1636 log.go:172] (0xc000a1f1e0) (0xc000626aa0) Stream added, broadcasting: 5\nI0626 00:17:01.803688 1636 log.go:172] (0xc000a1f1e0) Reply frame received for 5\nI0626 00:17:01.874012 1636 log.go:172] (0xc000a1f1e0) Data frame received for 5\nI0626 00:17:01.874055 1636 log.go:172] (0xc000626aa0) (5) Data frame handling\nI0626 00:17:01.874070 1636 log.go:172] (0xc000626aa0) (5) Data frame sent\nI0626 00:17:01.874081 1636 log.go:172] (0xc000a1f1e0) Data frame received for 5\nI0626 00:17:01.874091 1636 log.go:172] (0xc000626aa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31583\nConnection to 172.17.0.13 31583 port [tcp/31583] succeeded!\nI0626 00:17:01.874115 1636 log.go:172] (0xc000a1f1e0) Data frame received for 3\nI0626 00:17:01.874126 1636 log.go:172] (0xc000842dc0) (3) Data frame handling\nI0626 00:17:01.875453 1636 log.go:172] (0xc000a1f1e0) Data frame received for 1\nI0626 00:17:01.875479 1636 log.go:172] (0xc000b7e6e0) (1) Data frame handling\nI0626 00:17:01.875500 1636 log.go:172] (0xc000b7e6e0) (1) Data frame sent\nI0626 00:17:01.875521 1636 log.go:172] (0xc000a1f1e0) (0xc000b7e6e0) Stream removed, broadcasting: 1\nI0626 00:17:01.875551 1636 log.go:172] (0xc000a1f1e0) Go away received\nI0626 00:17:01.875933 1636 log.go:172] (0xc000a1f1e0) (0xc000b7e6e0) Stream removed, broadcasting: 1\nI0626 00:17:01.875957 1636 log.go:172] (0xc000a1f1e0) (0xc000842dc0) Stream removed, broadcasting: 3\nI0626 00:17:01.875968 1636 log.go:172] (0xc000a1f1e0) (0xc000626aa0) Stream removed, broadcasting: 5\n" Jun 26 00:17:01.881: INFO: stdout: "" Jun 26 00:17:01.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6583 execpodzh7cf -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31583' Jun 26 00:17:02.120: INFO: stderr: "I0626 00:17:02.014296 1657 log.go:172] (0xc0009fd4a0) (0xc000bfa460) Create stream\nI0626 00:17:02.014396 1657 log.go:172] (0xc0009fd4a0) (0xc000bfa460) Stream added, broadcasting: 1\nI0626 00:17:02.020030 1657 log.go:172] 
(0xc0009fd4a0) Reply frame received for 1\nI0626 00:17:02.020072 1657 log.go:172] (0xc0009fd4a0) (0xc0003ea8c0) Create stream\nI0626 00:17:02.020081 1657 log.go:172] (0xc0009fd4a0) (0xc0003ea8c0) Stream added, broadcasting: 3\nI0626 00:17:02.020977 1657 log.go:172] (0xc0009fd4a0) Reply frame received for 3\nI0626 00:17:02.021008 1657 log.go:172] (0xc0009fd4a0) (0xc00032a000) Create stream\nI0626 00:17:02.021017 1657 log.go:172] (0xc0009fd4a0) (0xc00032a000) Stream added, broadcasting: 5\nI0626 00:17:02.021947 1657 log.go:172] (0xc0009fd4a0) Reply frame received for 5\nI0626 00:17:02.111029 1657 log.go:172] (0xc0009fd4a0) Data frame received for 5\nI0626 00:17:02.111089 1657 log.go:172] (0xc00032a000) (5) Data frame handling\nI0626 00:17:02.111113 1657 log.go:172] (0xc00032a000) (5) Data frame sent\nI0626 00:17:02.111126 1657 log.go:172] (0xc0009fd4a0) Data frame received for 5\nI0626 00:17:02.111136 1657 log.go:172] (0xc00032a000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31583\nConnection to 172.17.0.12 31583 port [tcp/31583] succeeded!\nI0626 00:17:02.111161 1657 log.go:172] (0xc0009fd4a0) Data frame received for 3\nI0626 00:17:02.111248 1657 log.go:172] (0xc0003ea8c0) (3) Data frame handling\nI0626 00:17:02.112602 1657 log.go:172] (0xc0009fd4a0) Data frame received for 1\nI0626 00:17:02.112622 1657 log.go:172] (0xc000bfa460) (1) Data frame handling\nI0626 00:17:02.112645 1657 log.go:172] (0xc000bfa460) (1) Data frame sent\nI0626 00:17:02.112658 1657 log.go:172] (0xc0009fd4a0) (0xc000bfa460) Stream removed, broadcasting: 1\nI0626 00:17:02.112697 1657 log.go:172] (0xc0009fd4a0) Go away received\nI0626 00:17:02.112934 1657 log.go:172] (0xc0009fd4a0) (0xc000bfa460) Stream removed, broadcasting: 1\nI0626 00:17:02.112947 1657 log.go:172] (0xc0009fd4a0) (0xc0003ea8c0) Stream removed, broadcasting: 3\nI0626 00:17:02.112954 1657 log.go:172] (0xc0009fd4a0) (0xc00032a000) Stream removed, broadcasting: 5\n" Jun 26 00:17:02.120: INFO: stdout: "" Jun 26 00:17:02.120: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:17:02.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6583" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:12.323 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":294,"completed":125,"skipped":1927,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:17:02.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jun 26 00:17:02.237: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jun 26 00:17:02.246: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 26 00:17:02.246: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jun 26 00:17:02.266: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 26 00:17:02.266: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jun 26 00:17:02.310: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jun 26 00:17:02.310: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jun 26 00:17:09.578: INFO: 
limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:17:09.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-331" for this suite. • [SLOW TEST:7.523 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":294,"completed":126,"skipped":1935,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:17:09.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:17:10.790: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 00:17:12.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727430, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727430, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727430, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727430, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:17:14.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727430, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727430, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727430, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727430, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:17:17.995: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:17:18.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4321" for this suite. STEP: Destroying namespace "webhook-4321-markers" for this suite. 
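
The webhook test above registers admission webhooks that themselves target ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, then verifies the API server still permits creating and deleting such objects (webhook configurations are exempt from webhook admission, so a misbehaving webhook cannot lock itself in). A hedged client-go sketch of creating the "dummy" validating configuration; the webhook name, target service, and failure policy are illustrative:

package webhooksketch

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createDummyValidatingConfig creates a throwaway configuration; the
// test's point is that deleting it afterwards must succeed even while
// hostile webhooks watch configuration objects.
func createDummyValidatingConfig(client kubernetes.Interface) error {
	fail := admissionv1.Fail
	none := admissionv1.SideEffectClassNone
	cfg := &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "dummy-validating-webhook"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "dummy.example.com", // illustrative webhook name
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-4321", // placeholder namespace
					Name:      "e2e-test-webhook",
				},
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	_, err := client.AdmissionregistrationV1().
		ValidatingWebhookConfigurations().
		Create(context.TODO(), cfg, metav1.CreateOptions{})
	return err
}
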
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.682 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":294,"completed":127,"skipped":1959,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:17:18.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:17:19.629: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 00:17:21.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727439, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727439, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727439, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728727439, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:17:24.671: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating 
a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:17:25.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-250" for this suite. STEP: Destroying namespace "webhook-250-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.160 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":294,"completed":128,"skipped":1970,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:17:25.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-2f10ecf1-6850-41b4-b1f6-54da67742730 STEP: Creating a pod to test consume secrets Jun 26 00:17:25.651: INFO: Waiting up to 5m0s for pod "pod-secrets-b6346e07-e213-4e7b-86eb-2bd72286ea2d" in namespace "secrets-3080" to be "Succeeded or Failed" Jun 26 00:17:25.679: INFO: Pod "pod-secrets-b6346e07-e213-4e7b-86eb-2bd72286ea2d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.663174ms Jun 26 00:17:27.683: INFO: Pod "pod-secrets-b6346e07-e213-4e7b-86eb-2bd72286ea2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032540773s Jun 26 00:17:29.688: INFO: Pod "pod-secrets-b6346e07-e213-4e7b-86eb-2bd72286ea2d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037025497s STEP: Saw pod success Jun 26 00:17:29.688: INFO: Pod "pod-secrets-b6346e07-e213-4e7b-86eb-2bd72286ea2d" satisfied condition "Succeeded or Failed" Jun 26 00:17:29.691: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-b6346e07-e213-4e7b-86eb-2bd72286ea2d container secret-volume-test: STEP: delete the pod Jun 26 00:17:29.739: INFO: Waiting for pod pod-secrets-b6346e07-e213-4e7b-86eb-2bd72286ea2d to disappear Jun 26 00:17:29.751: INFO: Pod pod-secrets-b6346e07-e213-4e7b-86eb-2bd72286ea2d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:17:29.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3080" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":129,"skipped":1976,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:17:29.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 26 00:17:29.834: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 26 00:17:29.877: INFO: Waiting for terminating namespaces to be deleted... 
Jun 26 00:17:29.880: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jun 26 00:17:29.885: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jun 26 00:17:29.885: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 26 00:17:29.885: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jun 26 00:17:29.885: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 26 00:17:29.885: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 26 00:17:29.885: INFO: Container kindnet-cni ready: true, restart count 5 Jun 26 00:17:29.885: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 26 00:17:29.885: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 00:17:29.885: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jun 26 00:17:29.892: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jun 26 00:17:29.892: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 26 00:17:29.892: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jun 26 00:17:29.892: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 26 00:17:29.892: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 26 00:17:29.892: INFO: Container kindnet-cni ready: true, restart count 5 Jun 26 00:17:29.892: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 26 00:17:29.892: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b3fa97e4-1fd3-4328-8c7d-903e06188601 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-b3fa97e4-1fd3-4328-8c7d-903e06188601 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-b3fa97e4-1fd3-4328-8c7d-903e06188601 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:22:38.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3936" for this suite. 
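
The predicate being exercised above is that two pods conflict when they request the same hostPort and protocol and their hostIPs overlap; hostIP 0.0.0.0 (or empty) covers every node address, so pod5 on 127.0.0.1 cannot co-locate with pod4. A sketch of the pod shape involved, reusing the port value from the log (container name and image are illustrative):

package schedsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod requesting hostPort 54322 on the given
// hostIP. Calling it once with hostIP "" (0.0.0.0) and once with
// "127.0.0.1", pinned to the same node via the random e2e label,
// reproduces the scheduling conflict the test asserts.
func hostPortPod(name, hostIP string, nodeSelector map[string]string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: nodeSelector,
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // illustrative
				Ports: []corev1.ContainerPort{{
					ContainerPort: 80,
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}
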
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.364 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":294,"completed":130,"skipped":2003,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:22:38.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 26 00:22:38.253: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4671 /api/v1/namespaces/watch-4671/configmaps/e2e-watch-test-label-changed bb4f683f-680d-40bf-9a81-4ca98b92e24e 15916663 0 2020-06-26 00:22:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-26 00:22:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 00:22:38.253: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4671 /api/v1/namespaces/watch-4671/configmaps/e2e-watch-test-label-changed bb4f683f-680d-40bf-9a81-4ca98b92e24e 15916664 0 2020-06-26 00:22:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-26 00:22:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 00:22:38.253: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4671 /api/v1/namespaces/watch-4671/configmaps/e2e-watch-test-label-changed bb4f683f-680d-40bf-9a81-4ca98b92e24e 15916665 0 2020-06-26 00:22:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[{e2e.test Update v1 2020-06-26 00:22:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 26 00:22:48.342: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4671 /api/v1/namespaces/watch-4671/configmaps/e2e-watch-test-label-changed bb4f683f-680d-40bf-9a81-4ca98b92e24e 15916714 0 2020-06-26 00:22:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-26 00:22:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 00:22:48.343: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4671 /api/v1/namespaces/watch-4671/configmaps/e2e-watch-test-label-changed bb4f683f-680d-40bf-9a81-4ca98b92e24e 15916715 0 2020-06-26 00:22:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-26 00:22:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 00:22:48.343: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4671 /api/v1/namespaces/watch-4671/configmaps/e2e-watch-test-label-changed bb4f683f-680d-40bf-9a81-4ca98b92e24e 15916716 0 2020-06-26 00:22:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-26 00:22:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:22:48.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4671" for this suite. 
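
The ADDED/MODIFIED/DELETED sequence above comes from a selector-scoped watch: changing the label makes the object leave the selector (reported as DELETED even though the configmap still exists), and restoring it produces a fresh ADDED. A hedged client-go sketch of opening such a watch, with the namespace and selector mirroring the log:

package watchsketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchLabeledConfigMaps only sees configmaps whose
// watch-this-configmap label matches the given value; label changes
// therefore show up as DELETED/ADDED transitions on this channel.
func watchLabeledConfigMaps(client kubernetes.Interface, value string) error {
	w, err := client.CoreV1().ConfigMaps("watch-4671").Watch(
		context.TODO(), metav1.ListOptions{
			LabelSelector: "watch-this-configmap=" + value,
		})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}
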
• [SLOW TEST:10.225 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":294,"completed":131,"skipped":2022,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:22:48.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Jun 26 00:22:52.416: INFO: Pod pod-hostip-035ddeed-0cab-46b7-927b-5786788536da has hostIP: 172.17.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:22:52.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9598" for this suite. 
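The hostIP check above boils down to polling pod status until the kubelet has reported the node's address. A minimal sketch under that assumption (helper function only; the clientset is assumed to be built elsewhere, and the names are illustrative):

package hostipsketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForHostIP polls a pod until status.hostIP is populated. The field is
// empty until the pod is bound to a node and the kubelet reports status.
func waitForHostIP(client kubernetes.Interface, ns, name string) (string, error) {
	var hostIP string
	err := wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		hostIP = pod.Status.HostIP
		return hostIP != "", nil
	})
	return hostIP, err
}

In the run above the reported value, 172.17.0.12, is the Docker-network address of the kind worker node hosting the pod.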
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":294,"completed":132,"skipped":2025,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:22:52.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-1da97d14-0348-4dc6-b9fd-83f971ad3684 in namespace container-probe-3402 Jun 26 00:22:56.499: INFO: Started pod liveness-1da97d14-0348-4dc6-b9fd-83f971ad3684 in namespace container-probe-3402 STEP: checking the pod's current state and verifying that restartCount is present Jun 26 00:22:56.514: INFO: Initial restart count of pod liveness-1da97d14-0348-4dc6-b9fd-83f971ad3684 is 0 Jun 26 00:23:08.587: INFO: Restart count of pod container-probe-3402/liveness-1da97d14-0348-4dc6-b9fd-83f971ad3684 is now 1 (12.073222234s elapsed) Jun 26 00:23:28.635: INFO: Restart count of pod container-probe-3402/liveness-1da97d14-0348-4dc6-b9fd-83f971ad3684 is now 2 (32.120529606s elapsed) Jun 26 00:23:48.676: INFO: Restart count of pod container-probe-3402/liveness-1da97d14-0348-4dc6-b9fd-83f971ad3684 is now 3 (52.161864234s elapsed) Jun 26 00:24:08.721: INFO: Restart count of pod container-probe-3402/liveness-1da97d14-0348-4dc6-b9fd-83f971ad3684 is now 4 (1m12.206598165s elapsed) Jun 26 00:25:16.889: INFO: Restart count of pod container-probe-3402/liveness-1da97d14-0348-4dc6-b9fd-83f971ad3684 is now 5 (2m20.374963482s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:25:16.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3402" for this suite. 
• [SLOW TEST:144.500 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":294,"completed":133,"skipped":2025,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:25:16.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 26 00:25:17.004: INFO: Waiting up to 5m0s for pod "pod-024d00bd-604a-4daa-82ae-72fb8856f7ea" in namespace "emptydir-5046" to be "Succeeded or Failed" Jun 26 00:25:17.027: INFO: Pod "pod-024d00bd-604a-4daa-82ae-72fb8856f7ea": Phase="Pending", Reason="", readiness=false. Elapsed: 22.584752ms Jun 26 00:25:19.067: INFO: Pod "pod-024d00bd-604a-4daa-82ae-72fb8856f7ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062652364s Jun 26 00:25:21.071: INFO: Pod "pod-024d00bd-604a-4daa-82ae-72fb8856f7ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066693968s STEP: Saw pod success Jun 26 00:25:21.071: INFO: Pod "pod-024d00bd-604a-4daa-82ae-72fb8856f7ea" satisfied condition "Succeeded or Failed" Jun 26 00:25:21.074: INFO: Trying to get logs from node latest-worker pod pod-024d00bd-604a-4daa-82ae-72fb8856f7ea container test-container: STEP: delete the pod Jun 26 00:25:21.158: INFO: Waiting for pod pod-024d00bd-604a-4daa-82ae-72fb8856f7ea to disappear Jun 26 00:25:21.170: INFO: Pod pod-024d00bd-604a-4daa-82ae-72fb8856f7ea no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:25:21.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5046" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":134,"skipped":2026,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:25:21.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4700 Jun 26 00:25:25.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4700 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 26 00:25:28.513: INFO: stderr: "I0626 00:25:28.297336 1677 log.go:172] (0xc0000e9ad0) (0xc0006e17c0) Create stream\nI0626 00:25:28.297369 1677 log.go:172] (0xc0000e9ad0) (0xc0006e17c0) Stream added, broadcasting: 1\nI0626 00:25:28.299594 1677 log.go:172] (0xc0000e9ad0) Reply frame received for 1\nI0626 00:25:28.299636 1677 log.go:172] (0xc0000e9ad0) (0xc00069c000) Create stream\nI0626 00:25:28.299647 1677 log.go:172] (0xc0000e9ad0) (0xc00069c000) Stream added, broadcasting: 3\nI0626 00:25:28.300562 1677 log.go:172] (0xc0000e9ad0) Reply frame received for 3\nI0626 00:25:28.300593 1677 log.go:172] (0xc0000e9ad0) (0xc00067c000) Create stream\nI0626 00:25:28.300602 1677 log.go:172] (0xc0000e9ad0) (0xc00067c000) Stream added, broadcasting: 5\nI0626 00:25:28.301729 1677 log.go:172] (0xc0000e9ad0) Reply frame received for 5\nI0626 00:25:28.422231 1677 log.go:172] (0xc0000e9ad0) Data frame received for 5\nI0626 00:25:28.422255 1677 log.go:172] (0xc00067c000) (5) Data frame handling\nI0626 00:25:28.422268 1677 log.go:172] (0xc00067c000) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0626 00:25:28.498863 1677 log.go:172] (0xc0000e9ad0) Data frame received for 3\nI0626 00:25:28.498897 1677 log.go:172] (0xc00069c000) (3) Data frame handling\nI0626 00:25:28.498909 1677 log.go:172] (0xc00069c000) (3) Data frame sent\nI0626 00:25:28.499589 1677 log.go:172] (0xc0000e9ad0) Data frame received for 5\nI0626 00:25:28.499627 1677 log.go:172] (0xc00067c000) (5) Data frame handling\nI0626 00:25:28.499664 1677 log.go:172] (0xc0000e9ad0) Data frame received for 3\nI0626 00:25:28.499681 1677 log.go:172] (0xc00069c000) (3) Data frame handling\nI0626 00:25:28.501838 1677 log.go:172] (0xc0000e9ad0) Data frame received for 1\nI0626 00:25:28.501888 1677 log.go:172] (0xc0006e17c0) (1) Data frame handling\nI0626 
00:25:28.501923 1677 log.go:172] (0xc0006e17c0) (1) Data frame sent\nI0626 00:25:28.501945 1677 log.go:172] (0xc0000e9ad0) (0xc0006e17c0) Stream removed, broadcasting: 1\nI0626 00:25:28.501978 1677 log.go:172] (0xc0000e9ad0) Go away received\nI0626 00:25:28.502557 1677 log.go:172] (0xc0000e9ad0) (0xc0006e17c0) Stream removed, broadcasting: 1\nI0626 00:25:28.502584 1677 log.go:172] (0xc0000e9ad0) (0xc00069c000) Stream removed, broadcasting: 3\nI0626 00:25:28.502598 1677 log.go:172] (0xc0000e9ad0) (0xc00067c000) Stream removed, broadcasting: 5\n" Jun 26 00:25:28.513: INFO: stdout: "iptables" Jun 26 00:25:28.513: INFO: proxyMode: iptables Jun 26 00:25:28.517: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 26 00:25:28.529: INFO: Pod kube-proxy-mode-detector still exists Jun 26 00:25:30.529: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 26 00:25:30.534: INFO: Pod kube-proxy-mode-detector still exists Jun 26 00:25:32.529: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 26 00:25:32.534: INFO: Pod kube-proxy-mode-detector still exists Jun 26 00:25:34.529: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 26 00:25:34.534: INFO: Pod kube-proxy-mode-detector still exists Jun 26 00:25:36.529: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 26 00:25:36.534: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-4700 STEP: creating replication controller affinity-clusterip-timeout in namespace services-4700 I0626 00:25:36.582512 8 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-4700, replica count: 3 I0626 00:25:39.632927 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 00:25:42.633265 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 00:25:45.633481 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 26 00:25:45.639: INFO: Creating new exec pod Jun 26 00:25:50.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4700 execpod-affinitym4crt -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jun 26 00:25:50.938: INFO: stderr: "I0626 00:25:50.788323 1711 log.go:172] (0xc000926a50) (0xc0006cd040) Create stream\nI0626 00:25:50.788397 1711 log.go:172] (0xc000926a50) (0xc0006cd040) Stream added, broadcasting: 1\nI0626 00:25:50.792033 1711 log.go:172] (0xc000926a50) Reply frame received for 1\nI0626 00:25:50.792065 1711 log.go:172] (0xc000926a50) (0xc0004ac960) Create stream\nI0626 00:25:50.792074 1711 log.go:172] (0xc000926a50) (0xc0004ac960) Stream added, broadcasting: 3\nI0626 00:25:50.792753 1711 log.go:172] (0xc000926a50) Reply frame received for 3\nI0626 00:25:50.792781 1711 log.go:172] (0xc000926a50) (0xc000424a00) Create stream\nI0626 00:25:50.792790 1711 log.go:172] (0xc000926a50) (0xc000424a00) Stream added, broadcasting: 5\nI0626 00:25:50.793788 1711 log.go:172] (0xc000926a50) Reply frame received for 5\nI0626 00:25:50.918160 1711 log.go:172] (0xc000926a50) Data frame received for 5\nI0626 00:25:50.918195 1711 log.go:172] (0xc000424a00) (5) Data frame 
handling\nI0626 00:25:50.918218 1711 log.go:172] (0xc000424a00) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0626 00:25:50.930056 1711 log.go:172] (0xc000926a50) Data frame received for 5\nI0626 00:25:50.930083 1711 log.go:172] (0xc000424a00) (5) Data frame handling\nI0626 00:25:50.930092 1711 log.go:172] (0xc000424a00) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0626 00:25:50.930682 1711 log.go:172] (0xc000926a50) Data frame received for 3\nI0626 00:25:50.930706 1711 log.go:172] (0xc0004ac960) (3) Data frame handling\nI0626 00:25:50.930801 1711 log.go:172] (0xc000926a50) Data frame received for 5\nI0626 00:25:50.930839 1711 log.go:172] (0xc000424a00) (5) Data frame handling\nI0626 00:25:50.932828 1711 log.go:172] (0xc000926a50) Data frame received for 1\nI0626 00:25:50.932868 1711 log.go:172] (0xc0006cd040) (1) Data frame handling\nI0626 00:25:50.932905 1711 log.go:172] (0xc0006cd040) (1) Data frame sent\nI0626 00:25:50.932936 1711 log.go:172] (0xc000926a50) (0xc0006cd040) Stream removed, broadcasting: 1\nI0626 00:25:50.932970 1711 log.go:172] (0xc000926a50) Go away received\nI0626 00:25:50.933627 1711 log.go:172] (0xc000926a50) (0xc0006cd040) Stream removed, broadcasting: 1\nI0626 00:25:50.933676 1711 log.go:172] (0xc000926a50) (0xc0004ac960) Stream removed, broadcasting: 3\nI0626 00:25:50.933712 1711 log.go:172] (0xc000926a50) (0xc000424a00) Stream removed, broadcasting: 5\n" Jun 26 00:25:50.939: INFO: stdout: "" Jun 26 00:25:50.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4700 execpod-affinitym4crt -- /bin/sh -x -c nc -zv -t -w 2 10.106.178.246 80' Jun 26 00:25:51.174: INFO: stderr: "I0626 00:25:51.094244 1732 log.go:172] (0xc000a276b0) (0xc000bbc5a0) Create stream\nI0626 00:25:51.094301 1732 log.go:172] (0xc000a276b0) (0xc000bbc5a0) Stream added, broadcasting: 1\nI0626 00:25:51.099010 1732 log.go:172] (0xc000a276b0) Reply frame received for 1\nI0626 00:25:51.099054 1732 log.go:172] (0xc000a276b0) (0xc0000dd040) Create stream\nI0626 00:25:51.099068 1732 log.go:172] (0xc000a276b0) (0xc0000dd040) Stream added, broadcasting: 3\nI0626 00:25:51.100232 1732 log.go:172] (0xc000a276b0) Reply frame received for 3\nI0626 00:25:51.100270 1732 log.go:172] (0xc000a276b0) (0xc000716be0) Create stream\nI0626 00:25:51.100281 1732 log.go:172] (0xc000a276b0) (0xc000716be0) Stream added, broadcasting: 5\nI0626 00:25:51.101392 1732 log.go:172] (0xc000a276b0) Reply frame received for 5\nI0626 00:25:51.163918 1732 log.go:172] (0xc000a276b0) Data frame received for 5\nI0626 00:25:51.163964 1732 log.go:172] (0xc000716be0) (5) Data frame handling\nI0626 00:25:51.163974 1732 log.go:172] (0xc000716be0) (5) Data frame sent\nI0626 00:25:51.163979 1732 log.go:172] (0xc000a276b0) Data frame received for 5\nI0626 00:25:51.163984 1732 log.go:172] (0xc000716be0) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.178.246 80\nConnection to 10.106.178.246 80 port [tcp/http] succeeded!\nI0626 00:25:51.164007 1732 log.go:172] (0xc000a276b0) Data frame received for 3\nI0626 00:25:51.164014 1732 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0626 00:25:51.165757 1732 log.go:172] (0xc000a276b0) Data frame received for 1\nI0626 00:25:51.165788 1732 log.go:172] (0xc000bbc5a0) (1) Data frame handling\nI0626 00:25:51.165805 1732 log.go:172] (0xc000bbc5a0) (1) Data frame sent\nI0626 00:25:51.165818 1732 log.go:172] (0xc000a276b0) (0xc000bbc5a0) Stream removed, 
broadcasting: 1\nI0626 00:25:51.165895 1732 log.go:172] (0xc000a276b0) Go away received\nI0626 00:25:51.166165 1732 log.go:172] (0xc000a276b0) (0xc000bbc5a0) Stream removed, broadcasting: 1\nI0626 00:25:51.166183 1732 log.go:172] (0xc000a276b0) (0xc0000dd040) Stream removed, broadcasting: 3\nI0626 00:25:51.166194 1732 log.go:172] (0xc000a276b0) (0xc000716be0) Stream removed, broadcasting: 5\n" Jun 26 00:25:51.174: INFO: stdout: "" Jun 26 00:25:51.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4700 execpod-affinitym4crt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.178.246:80/ ; done' Jun 26 00:25:51.508: INFO: stderr: "I0626 00:25:51.309594 1753 log.go:172] (0xc0007971e0) (0xc0000df9a0) Create stream\nI0626 00:25:51.309651 1753 log.go:172] (0xc0007971e0) (0xc0000df9a0) Stream added, broadcasting: 1\nI0626 00:25:51.312068 1753 log.go:172] (0xc0007971e0) Reply frame received for 1\nI0626 00:25:51.312117 1753 log.go:172] (0xc0007971e0) (0xc0007685a0) Create stream\nI0626 00:25:51.312133 1753 log.go:172] (0xc0007971e0) (0xc0007685a0) Stream added, broadcasting: 3\nI0626 00:25:51.313414 1753 log.go:172] (0xc0007971e0) Reply frame received for 3\nI0626 00:25:51.313453 1753 log.go:172] (0xc0007971e0) (0xc000682500) Create stream\nI0626 00:25:51.313474 1753 log.go:172] (0xc0007971e0) (0xc000682500) Stream added, broadcasting: 5\nI0626 00:25:51.314376 1753 log.go:172] (0xc0007971e0) Reply frame received for 5\nI0626 00:25:51.377874 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.377917 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.377931 1753 log.go:172] (0xc000682500) (5) Data frame sent\nI0626 00:25:51.377939 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.377946 1753 log.go:172] (0xc000682500) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.377963 1753 log.go:172] (0xc000682500) (5) Data frame sent\nI0626 00:25:51.377972 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.377984 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.377999 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.419945 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.419979 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.420004 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.420799 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.420857 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.420881 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.420925 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.420954 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.420978 1753 log.go:172] (0xc000682500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.424500 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.424518 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.424529 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.425030 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.425051 1753 log.go:172] (0xc000682500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 
2 http://10.106.178.246:80/\nI0626 00:25:51.425066 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.425083 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.425093 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.425454 1753 log.go:172] (0xc000682500) (5) Data frame sent\nI0626 00:25:51.429521 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.429539 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.429548 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.430001 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.430019 1753 log.go:172] (0xc000682500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.430036 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.430063 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.430078 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.430096 1753 log.go:172] (0xc000682500) (5) Data frame sent\nI0626 00:25:51.434209 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.434231 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.434244 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.434554 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.434577 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.434602 1753 log.go:172] (0xc000682500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.434906 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.434929 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.434942 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.438654 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.438678 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.438784 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.438950 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.438972 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.438985 1753 log.go:172] (0xc000682500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.439026 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.439060 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.439089 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.442948 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.442962 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.442969 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.443433 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.443452 1753 log.go:172] (0xc000682500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.443470 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.443499 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.443524 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.443540 1753 log.go:172] (0xc000682500) (5) Data frame sent\nI0626 00:25:51.447172 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.447197 1753 log.go:172] 
(0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.447216 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.447608 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.447627 1753 log.go:172] (0xc000682500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.447645 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.447669 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.447685 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.447715 1753 log.go:172] (0xc000682500) (5) Data frame sent\nI0626 00:25:51.452219 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.452251 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.452275 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.452451 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.452469 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.452476 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.452486 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.452495 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.452501 1753 log.go:172] (0xc000682500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.458824 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.458851 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.458871 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.459058 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.459074 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.459083 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.459099 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.459110 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.459128 1753 log.go:172] (0xc000682500) (5) Data frame sent\nI0626 00:25:51.459138 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.459147 1753 log.go:172] (0xc000682500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.459166 1753 log.go:172] (0xc000682500) (5) Data frame sent\nI0626 00:25:51.463170 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.463190 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.463210 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.463768 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.463800 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.463810 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.463825 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.463834 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.463840 1753 log.go:172] (0xc000682500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.469908 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.469931 1753 log.go:172] (0xc000682500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.469956 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.469986 1753 log.go:172] 
(0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.470000 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.470023 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.470040 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.470053 1753 log.go:172] (0xc000682500) (5) Data frame sent\nI0626 00:25:51.470120 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.475263 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.475285 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.475317 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.475793 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.475827 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.475844 1753 log.go:172] (0xc000682500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.475868 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.475882 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.475901 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.482205 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.482241 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.482257 1753 log.go:172] (0xc000682500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.482292 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.482317 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.482339 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.482360 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.482376 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.482429 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.487280 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.487316 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.487344 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.487825 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.487842 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.487851 1753 log.go:172] (0xc000682500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.487870 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.487899 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.487915 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.492628 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.492652 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.492669 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.493432 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.493455 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.493466 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.493492 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.493511 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.493531 1753 log.go:172] (0xc000682500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.498294 1753 
log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.498312 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.498322 1753 log.go:172] (0xc0007685a0) (3) Data frame sent\nI0626 00:25:51.499394 1753 log.go:172] (0xc0007971e0) Data frame received for 5\nI0626 00:25:51.499426 1753 log.go:172] (0xc000682500) (5) Data frame handling\nI0626 00:25:51.499446 1753 log.go:172] (0xc0007971e0) Data frame received for 3\nI0626 00:25:51.499474 1753 log.go:172] (0xc0007685a0) (3) Data frame handling\nI0626 00:25:51.501088 1753 log.go:172] (0xc0007971e0) Data frame received for 1\nI0626 00:25:51.501104 1753 log.go:172] (0xc0000df9a0) (1) Data frame handling\nI0626 00:25:51.501260 1753 log.go:172] (0xc0000df9a0) (1) Data frame sent\nI0626 00:25:51.501276 1753 log.go:172] (0xc0007971e0) (0xc0000df9a0) Stream removed, broadcasting: 1\nI0626 00:25:51.501348 1753 log.go:172] (0xc0007971e0) Go away received\nI0626 00:25:51.501603 1753 log.go:172] (0xc0007971e0) (0xc0000df9a0) Stream removed, broadcasting: 1\nI0626 00:25:51.501620 1753 log.go:172] (0xc0007971e0) (0xc0007685a0) Stream removed, broadcasting: 3\nI0626 00:25:51.501628 1753 log.go:172] (0xc0007971e0) (0xc000682500) Stream removed, broadcasting: 5\n" Jun 26 00:25:51.509: INFO: stdout: "\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv\naffinity-clusterip-timeout-zgjmv" Jun 26 00:25:51.509: INFO: Received response from host: Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Received response from host: affinity-clusterip-timeout-zgjmv Jun 26 00:25:51.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4700 execpod-affinitym4crt -- /bin/sh -x -c curl -q -s --connect-timeout 2 
http://10.106.178.246:80/' Jun 26 00:25:51.742: INFO: stderr: "I0626 00:25:51.653058 1773 log.go:172] (0xc00003b8c0) (0xc000862fa0) Create stream\nI0626 00:25:51.653338 1773 log.go:172] (0xc00003b8c0) (0xc000862fa0) Stream added, broadcasting: 1\nI0626 00:25:51.655370 1773 log.go:172] (0xc00003b8c0) Reply frame received for 1\nI0626 00:25:51.655408 1773 log.go:172] (0xc00003b8c0) (0xc00086d7c0) Create stream\nI0626 00:25:51.655416 1773 log.go:172] (0xc00003b8c0) (0xc00086d7c0) Stream added, broadcasting: 3\nI0626 00:25:51.656485 1773 log.go:172] (0xc00003b8c0) Reply frame received for 3\nI0626 00:25:51.656527 1773 log.go:172] (0xc00003b8c0) (0xc00085c780) Create stream\nI0626 00:25:51.656540 1773 log.go:172] (0xc00003b8c0) (0xc00085c780) Stream added, broadcasting: 5\nI0626 00:25:51.657679 1773 log.go:172] (0xc00003b8c0) Reply frame received for 5\nI0626 00:25:51.726763 1773 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0626 00:25:51.726795 1773 log.go:172] (0xc00085c780) (5) Data frame handling\nI0626 00:25:51.726814 1773 log.go:172] (0xc00085c780) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:25:51.732884 1773 log.go:172] (0xc00003b8c0) Data frame received for 3\nI0626 00:25:51.732919 1773 log.go:172] (0xc00086d7c0) (3) Data frame handling\nI0626 00:25:51.732937 1773 log.go:172] (0xc00086d7c0) (3) Data frame sent\nI0626 00:25:51.734095 1773 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0626 00:25:51.734123 1773 log.go:172] (0xc00085c780) (5) Data frame handling\nI0626 00:25:51.734380 1773 log.go:172] (0xc00003b8c0) Data frame received for 3\nI0626 00:25:51.734427 1773 log.go:172] (0xc00086d7c0) (3) Data frame handling\nI0626 00:25:51.735931 1773 log.go:172] (0xc00003b8c0) Data frame received for 1\nI0626 00:25:51.735966 1773 log.go:172] (0xc000862fa0) (1) Data frame handling\nI0626 00:25:51.735990 1773 log.go:172] (0xc000862fa0) (1) Data frame sent\nI0626 00:25:51.736013 1773 log.go:172] (0xc00003b8c0) (0xc000862fa0) Stream removed, broadcasting: 1\nI0626 00:25:51.736030 1773 log.go:172] (0xc00003b8c0) Go away received\nI0626 00:25:51.736488 1773 log.go:172] (0xc00003b8c0) (0xc000862fa0) Stream removed, broadcasting: 1\nI0626 00:25:51.736508 1773 log.go:172] (0xc00003b8c0) (0xc00086d7c0) Stream removed, broadcasting: 3\nI0626 00:25:51.736518 1773 log.go:172] (0xc00003b8c0) (0xc00085c780) Stream removed, broadcasting: 5\n" Jun 26 00:25:51.743: INFO: stdout: "affinity-clusterip-timeout-zgjmv" Jun 26 00:26:06.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4700 execpod-affinitym4crt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.106.178.246:80/' Jun 26 00:26:06.983: INFO: stderr: "I0626 00:26:06.883756 1793 log.go:172] (0xc000bb7130) (0xc000aee1e0) Create stream\nI0626 00:26:06.883826 1793 log.go:172] (0xc000bb7130) (0xc000aee1e0) Stream added, broadcasting: 1\nI0626 00:26:06.890306 1793 log.go:172] (0xc000bb7130) Reply frame received for 1\nI0626 00:26:06.890349 1793 log.go:172] (0xc000bb7130) (0xc000858280) Create stream\nI0626 00:26:06.890360 1793 log.go:172] (0xc000bb7130) (0xc000858280) Stream added, broadcasting: 3\nI0626 00:26:06.891469 1793 log.go:172] (0xc000bb7130) Reply frame received for 3\nI0626 00:26:06.891519 1793 log.go:172] (0xc000bb7130) (0xc0005d2960) Create stream\nI0626 00:26:06.891535 1793 log.go:172] (0xc000bb7130) (0xc0005d2960) Stream added, broadcasting: 5\nI0626 00:26:06.892521 1793 log.go:172] 
(0xc000bb7130) Reply frame received for 5\nI0626 00:26:06.974488 1793 log.go:172] (0xc000bb7130) Data frame received for 5\nI0626 00:26:06.974516 1793 log.go:172] (0xc0005d2960) (5) Data frame handling\nI0626 00:26:06.974536 1793 log.go:172] (0xc0005d2960) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:26:06.977085 1793 log.go:172] (0xc000bb7130) Data frame received for 3\nI0626 00:26:06.977107 1793 log.go:172] (0xc000858280) (3) Data frame handling\nI0626 00:26:06.977260 1793 log.go:172] (0xc000858280) (3) Data frame sent\nI0626 00:26:06.977819 1793 log.go:172] (0xc000bb7130) Data frame received for 5\nI0626 00:26:06.977850 1793 log.go:172] (0xc0005d2960) (5) Data frame handling\nI0626 00:26:06.977876 1793 log.go:172] (0xc000bb7130) Data frame received for 3\nI0626 00:26:06.977888 1793 log.go:172] (0xc000858280) (3) Data frame handling\nI0626 00:26:06.979120 1793 log.go:172] (0xc000bb7130) Data frame received for 1\nI0626 00:26:06.979143 1793 log.go:172] (0xc000aee1e0) (1) Data frame handling\nI0626 00:26:06.979157 1793 log.go:172] (0xc000aee1e0) (1) Data frame sent\nI0626 00:26:06.979184 1793 log.go:172] (0xc000bb7130) (0xc000aee1e0) Stream removed, broadcasting: 1\nI0626 00:26:06.979213 1793 log.go:172] (0xc000bb7130) Go away received\nI0626 00:26:06.979512 1793 log.go:172] (0xc000bb7130) (0xc000aee1e0) Stream removed, broadcasting: 1\nI0626 00:26:06.979530 1793 log.go:172] (0xc000bb7130) (0xc000858280) Stream removed, broadcasting: 3\nI0626 00:26:06.979538 1793 log.go:172] (0xc000bb7130) (0xc0005d2960) Stream removed, broadcasting: 5\n" Jun 26 00:26:06.983: INFO: stdout: "affinity-clusterip-timeout-zgjmv" Jun 26 00:26:21.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4700 execpod-affinitym4crt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.106.178.246:80/' Jun 26 00:26:22.237: INFO: stderr: "I0626 00:26:22.126870 1812 log.go:172] (0xc000bae4d0) (0xc0006efd60) Create stream\nI0626 00:26:22.126927 1812 log.go:172] (0xc000bae4d0) (0xc0006efd60) Stream added, broadcasting: 1\nI0626 00:26:22.130044 1812 log.go:172] (0xc000bae4d0) Reply frame received for 1\nI0626 00:26:22.130110 1812 log.go:172] (0xc000bae4d0) (0xc0005efcc0) Create stream\nI0626 00:26:22.130136 1812 log.go:172] (0xc000bae4d0) (0xc0005efcc0) Stream added, broadcasting: 3\nI0626 00:26:22.131085 1812 log.go:172] (0xc000bae4d0) Reply frame received for 3\nI0626 00:26:22.131133 1812 log.go:172] (0xc000bae4d0) (0xc0006f2dc0) Create stream\nI0626 00:26:22.131142 1812 log.go:172] (0xc000bae4d0) (0xc0006f2dc0) Stream added, broadcasting: 5\nI0626 00:26:22.132049 1812 log.go:172] (0xc000bae4d0) Reply frame received for 5\nI0626 00:26:22.224662 1812 log.go:172] (0xc000bae4d0) Data frame received for 5\nI0626 00:26:22.224695 1812 log.go:172] (0xc0006f2dc0) (5) Data frame handling\nI0626 00:26:22.224743 1812 log.go:172] (0xc0006f2dc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:26:22.227454 1812 log.go:172] (0xc000bae4d0) Data frame received for 3\nI0626 00:26:22.227480 1812 log.go:172] (0xc0005efcc0) (3) Data frame handling\nI0626 00:26:22.227499 1812 log.go:172] (0xc0005efcc0) (3) Data frame sent\nI0626 00:26:22.227686 1812 log.go:172] (0xc000bae4d0) Data frame received for 3\nI0626 00:26:22.227721 1812 log.go:172] (0xc0005efcc0) (3) Data frame handling\nI0626 00:26:22.227904 1812 log.go:172] (0xc000bae4d0) Data frame received for 5\nI0626 
00:26:22.227923 1812 log.go:172] (0xc0006f2dc0) (5) Data frame handling\nI0626 00:26:22.229973 1812 log.go:172] (0xc000bae4d0) Data frame received for 1\nI0626 00:26:22.229997 1812 log.go:172] (0xc0006efd60) (1) Data frame handling\nI0626 00:26:22.230022 1812 log.go:172] (0xc0006efd60) (1) Data frame sent\nI0626 00:26:22.230076 1812 log.go:172] (0xc000bae4d0) (0xc0006efd60) Stream removed, broadcasting: 1\nI0626 00:26:22.230257 1812 log.go:172] (0xc000bae4d0) Go away received\nI0626 00:26:22.231029 1812 log.go:172] (0xc000bae4d0) (0xc0006efd60) Stream removed, broadcasting: 1\nI0626 00:26:22.231065 1812 log.go:172] (0xc000bae4d0) (0xc0005efcc0) Stream removed, broadcasting: 3\nI0626 00:26:22.231102 1812 log.go:172] (0xc000bae4d0) (0xc0006f2dc0) Stream removed, broadcasting: 5\n" Jun 26 00:26:22.237: INFO: stdout: "affinity-clusterip-timeout-zgjmv" Jun 26 00:26:37.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4700 execpod-affinitym4crt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.106.178.246:80/' Jun 26 00:26:37.496: INFO: stderr: "I0626 00:26:37.368857 1831 log.go:172] (0xc000854000) (0xc000885220) Create stream\nI0626 00:26:37.368927 1831 log.go:172] (0xc000854000) (0xc000885220) Stream added, broadcasting: 1\nI0626 00:26:37.371629 1831 log.go:172] (0xc000854000) Reply frame received for 1\nI0626 00:26:37.371670 1831 log.go:172] (0xc000854000) (0xc0008685a0) Create stream\nI0626 00:26:37.371684 1831 log.go:172] (0xc000854000) (0xc0008685a0) Stream added, broadcasting: 3\nI0626 00:26:37.372503 1831 log.go:172] (0xc000854000) Reply frame received for 3\nI0626 00:26:37.372533 1831 log.go:172] (0xc000854000) (0xc0006c0280) Create stream\nI0626 00:26:37.372543 1831 log.go:172] (0xc000854000) (0xc0006c0280) Stream added, broadcasting: 5\nI0626 00:26:37.373542 1831 log.go:172] (0xc000854000) Reply frame received for 5\nI0626 00:26:37.472212 1831 log.go:172] (0xc000854000) Data frame received for 5\nI0626 00:26:37.472230 1831 log.go:172] (0xc0006c0280) (5) Data frame handling\nI0626 00:26:37.472240 1831 log.go:172] (0xc0006c0280) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.178.246:80/\nI0626 00:26:37.488820 1831 log.go:172] (0xc000854000) Data frame received for 3\nI0626 00:26:37.488842 1831 log.go:172] (0xc0008685a0) (3) Data frame handling\nI0626 00:26:37.488863 1831 log.go:172] (0xc0008685a0) (3) Data frame sent\nI0626 00:26:37.489738 1831 log.go:172] (0xc000854000) Data frame received for 5\nI0626 00:26:37.489759 1831 log.go:172] (0xc0006c0280) (5) Data frame handling\nI0626 00:26:37.489782 1831 log.go:172] (0xc000854000) Data frame received for 3\nI0626 00:26:37.489791 1831 log.go:172] (0xc0008685a0) (3) Data frame handling\nI0626 00:26:37.491017 1831 log.go:172] (0xc000854000) Data frame received for 1\nI0626 00:26:37.491033 1831 log.go:172] (0xc000885220) (1) Data frame handling\nI0626 00:26:37.491043 1831 log.go:172] (0xc000885220) (1) Data frame sent\nI0626 00:26:37.491161 1831 log.go:172] (0xc000854000) (0xc000885220) Stream removed, broadcasting: 1\nI0626 00:26:37.491212 1831 log.go:172] (0xc000854000) Go away received\nI0626 00:26:37.491654 1831 log.go:172] (0xc000854000) (0xc000885220) Stream removed, broadcasting: 1\nI0626 00:26:37.491680 1831 log.go:172] (0xc000854000) (0xc0008685a0) Stream removed, broadcasting: 3\nI0626 00:26:37.491708 1831 log.go:172] (0xc000854000) (0xc0006c0280) Stream removed, broadcasting: 5\n" Jun 26 00:26:37.496: INFO: stdout: 
"affinity-clusterip-timeout-gk929" Jun 26 00:26:37.496: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-4700, will wait for the garbage collector to delete the pods Jun 26 00:26:37.619: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.065764ms Jun 26 00:26:38.019: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.216736ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:26:45.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4700" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:84.321 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":135,"skipped":2035,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:26:45.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:26:45.599: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b55b75d8-c595-4c35-8136-18ec5d37baf1" in namespace "projected-870" to be "Succeeded or Failed" Jun 26 00:26:45.603: INFO: Pod "downwardapi-volume-b55b75d8-c595-4c35-8136-18ec5d37baf1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.912285ms Jun 26 00:26:47.607: INFO: Pod "downwardapi-volume-b55b75d8-c595-4c35-8136-18ec5d37baf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007549114s Jun 26 00:26:49.611: INFO: Pod "downwardapi-volume-b55b75d8-c595-4c35-8136-18ec5d37baf1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011873064s STEP: Saw pod success Jun 26 00:26:49.611: INFO: Pod "downwardapi-volume-b55b75d8-c595-4c35-8136-18ec5d37baf1" satisfied condition "Succeeded or Failed" Jun 26 00:26:49.615: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b55b75d8-c595-4c35-8136-18ec5d37baf1 container client-container: STEP: delete the pod Jun 26 00:26:49.665: INFO: Waiting for pod downwardapi-volume-b55b75d8-c595-4c35-8136-18ec5d37baf1 to disappear Jun 26 00:26:49.685: INFO: Pod downwardapi-volume-b55b75d8-c595-4c35-8136-18ec5d37baf1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:26:49.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-870" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":294,"completed":136,"skipped":2036,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:26:49.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:26:53.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9544" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":294,"completed":137,"skipped":2088,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:26:53.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jun 26 00:26:58.005: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-8886 PodName:var-expansion-ad5080b1-ecc7-4a6b-905d-af190485d58a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:26:58.005: INFO: >>> kubeConfig: /root/.kube/config I0626 00:26:58.040688 8 log.go:172] (0xc004e326e0) (0xc001d46e60) Create stream I0626 00:26:58.040726 8 log.go:172] (0xc004e326e0) (0xc001d46e60) Stream added, broadcasting: 1 I0626 00:26:58.043665 8 log.go:172] (0xc004e326e0) Reply frame received for 1 I0626 00:26:58.043713 8 log.go:172] (0xc004e326e0) (0xc002dd8b40) Create stream I0626 00:26:58.043724 8 log.go:172] (0xc004e326e0) (0xc002dd8b40) Stream added, broadcasting: 3 I0626 00:26:58.044639 8 log.go:172] (0xc004e326e0) Reply frame received for 3 I0626 00:26:58.044682 8 log.go:172] (0xc004e326e0) (0xc001d46f00) Create stream I0626 00:26:58.044697 8 log.go:172] (0xc004e326e0) (0xc001d46f00) Stream added, broadcasting: 5 I0626 00:26:58.045793 8 log.go:172] (0xc004e326e0) Reply frame received for 5 I0626 00:26:58.109311 8 log.go:172] (0xc004e326e0) Data frame received for 3 I0626 00:26:58.109344 8 log.go:172] (0xc002dd8b40) (3) Data frame handling I0626 00:26:58.109363 8 log.go:172] (0xc004e326e0) Data frame received for 5 I0626 00:26:58.109372 8 log.go:172] (0xc001d46f00) (5) Data frame handling I0626 00:26:58.110711 8 log.go:172] (0xc004e326e0) Data frame received for 1 I0626 00:26:58.110733 8 log.go:172] (0xc001d46e60) (1) Data frame handling I0626 00:26:58.110745 8 log.go:172] (0xc001d46e60) (1) Data frame sent I0626 00:26:58.110811 8 log.go:172] (0xc004e326e0) (0xc001d46e60) Stream removed, broadcasting: 1 I0626 00:26:58.110842 8 log.go:172] (0xc004e326e0) Go away received I0626 00:26:58.110904 8 log.go:172] (0xc004e326e0) (0xc001d46e60) Stream removed, broadcasting: 1 I0626 00:26:58.110920 8 log.go:172] (0xc004e326e0) (0xc002dd8b40) Stream removed, broadcasting: 3 I0626 00:26:58.110935 8 log.go:172] (0xc004e326e0) (0xc001d46f00) Stream removed, broadcasting: 5 STEP: test for file in mounted path Jun 26 00:26:58.114: INFO: ExecWithOptions {Command:[/bin/sh -c test -f 
/subpath_mount/test.log] Namespace:var-expansion-8886 PodName:var-expansion-ad5080b1-ecc7-4a6b-905d-af190485d58a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:26:58.114: INFO: >>> kubeConfig: /root/.kube/config I0626 00:26:58.142777 8 log.go:172] (0xc0010de840) (0xc002dd8fa0) Create stream I0626 00:26:58.142822 8 log.go:172] (0xc0010de840) (0xc002dd8fa0) Stream added, broadcasting: 1 I0626 00:26:58.144607 8 log.go:172] (0xc0010de840) Reply frame received for 1 I0626 00:26:58.144636 8 log.go:172] (0xc0010de840) (0xc00194c0a0) Create stream I0626 00:26:58.144646 8 log.go:172] (0xc0010de840) (0xc00194c0a0) Stream added, broadcasting: 3 I0626 00:26:58.146257 8 log.go:172] (0xc0010de840) Reply frame received for 3 I0626 00:26:58.146299 8 log.go:172] (0xc0010de840) (0xc002dd9180) Create stream I0626 00:26:58.146318 8 log.go:172] (0xc0010de840) (0xc002dd9180) Stream added, broadcasting: 5 I0626 00:26:58.147124 8 log.go:172] (0xc0010de840) Reply frame received for 5 I0626 00:26:58.206299 8 log.go:172] (0xc0010de840) Data frame received for 5 I0626 00:26:58.206327 8 log.go:172] (0xc002dd9180) (5) Data frame handling I0626 00:26:58.206345 8 log.go:172] (0xc0010de840) Data frame received for 3 I0626 00:26:58.206350 8 log.go:172] (0xc00194c0a0) (3) Data frame handling I0626 00:26:58.208044 8 log.go:172] (0xc0010de840) Data frame received for 1 I0626 00:26:58.208079 8 log.go:172] (0xc002dd8fa0) (1) Data frame handling I0626 00:26:58.208122 8 log.go:172] (0xc002dd8fa0) (1) Data frame sent I0626 00:26:58.208142 8 log.go:172] (0xc0010de840) (0xc002dd8fa0) Stream removed, broadcasting: 1 I0626 00:26:58.208161 8 log.go:172] (0xc0010de840) Go away received I0626 00:26:58.208292 8 log.go:172] (0xc0010de840) (0xc002dd8fa0) Stream removed, broadcasting: 1 I0626 00:26:58.208363 8 log.go:172] (0xc0010de840) (0xc00194c0a0) Stream removed, broadcasting: 3 I0626 00:26:58.208397 8 log.go:172] (0xc0010de840) (0xc002dd9180) Stream removed, broadcasting: 5 STEP: updating the annotation value Jun 26 00:26:58.718: INFO: Successfully updated pod "var-expansion-ad5080b1-ecc7-4a6b-905d-af190485d58a" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jun 26 00:26:58.768: INFO: Deleting pod "var-expansion-ad5080b1-ecc7-4a6b-905d-af190485d58a" in namespace "var-expansion-8886" Jun 26 00:26:58.773: INFO: Wait up to 5m0s for pod "var-expansion-ad5080b1-ecc7-4a6b-905d-af190485d58a" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:27:32.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8886" for this suite. 
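The subpath test above touches /volume_mount/mypath/foo/test.log and then asserts that /subpath_mount/test.log exists — that is, the second mount is the first volume narrowed to an expanded subPathExpr. A sketch of a pod wired the same way; the annotation-fed environment variable and all names are illustrative, not the exact e2e fixture:

package subpathsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subPathExprPod mounts one volume twice: once whole at /volume_mount, and
// once at /subpath_mount through a subPathExpr that expands an env var fed
// from a pod annotation via the downward API. Both mounts address the same
// directory, so a file written under the expanded path in one view is
// visible at the root of the other.
func subPathExprPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "var-expansion-example",
			Annotations: map[string]string{"mysubpath": "mypath/foo"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "sleep 600"},
				Env: []corev1.EnvVar{{
					Name: "POD_SUBPATH",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							FieldPath: "metadata.annotations['mysubpath']",
						},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "workdir", MountPath: "/volume_mount"},
					{Name: "workdir", MountPath: "/subpath_mount", SubPathExpr: "$(POD_SUBPATH)"},
				},
			}},
		},
	}
}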
• [SLOW TEST:38.970 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":294,"completed":138,"skipped":2099,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:27:32.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:27:32.876: INFO: Creating deployment "webserver-deployment" Jun 26 00:27:32.885: INFO: Waiting for observed generation 1 Jun 26 00:27:35.020: INFO: Waiting for all required pods to come up Jun 26 00:27:35.025: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 26 00:27:47.096: INFO: Waiting for deployment "webserver-deployment" to complete Jun 26 00:27:47.130: INFO: Updating deployment "webserver-deployment" with a non-existent image Jun 26 00:27:47.139: INFO: Updating deployment webserver-deployment Jun 26 00:27:47.139: INFO: Waiting for observed generation 2 Jun 26 00:27:49.187: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 26 00:27:49.190: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 26 00:27:49.269: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 26 00:27:49.439: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 26 00:27:49.439: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 26 00:27:49.442: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 26 00:27:49.446: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jun 26 00:27:49.446: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jun 26 00:27:49.452: INFO: Updating deployment webserver-deployment Jun 26 00:27:49.452: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jun 26 00:27:50.020: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 26 00:27:50.024: INFO: Verifying that second rollout's replicaset has 
.spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 26 00:27:52.834: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7381 /apis/apps/v1/namespaces/deployment-7381/deployments/webserver-deployment 3f9381aa-afb1-478f-87d3-290a670901f0 15918165 3 2020-06-26 00:27:32 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-26 00:27:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c931e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-26 00:27:50 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-06-26 00:27:50 +0000 UTC,LastTransitionTime:2020-06-26 00:27:32 +0000 
UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jun 26 00:27:53.519: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-7381 /apis/apps/v1/namespaces/deployment-7381/replicasets/webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 15918163 3 2020-06-26 00:27:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 3f9381aa-afb1-478f-87d3-290a670901f0 0xc002c93907 0xc002c93908}] [] [{kube-controller-manager Update apps/v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3f9381aa-afb1-478f-87d3-290a670901f0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c939a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 00:27:53.519: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jun 26 00:27:53.519: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-7381 /apis/apps/v1/namespaces/deployment-7381/replicasets/webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 15918141 3 2020-06-26 00:27:32 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 3f9381aa-afb1-478f-87d3-290a670901f0 0xc002c93a67 0xc002c93a68}] [] [{kube-controller-manager Update 
apps/v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3f9381aa-afb1-478f-87d3-290a670901f0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c93b08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jun 26 00:27:53.527: INFO: Pod "webserver-deployment-6676bcd6d4-4vk2c" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4vk2c webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-4vk2c d1ff5de4-74ee-4f7e-9d5d-98366792f465 15918065 0 2020-06-26 00:27:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251a077 0xc00251a078}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.527: INFO: Pod "webserver-deployment-6676bcd6d4-5bmg9" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5bmg9 webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-5bmg9 eccc2983-d971-4268-9f38-9889440c0db0 15918201 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251a237 0xc00251a238}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.527: INFO: Pod "webserver-deployment-6676bcd6d4-7d7xh" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7d7xh webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-7d7xh 7a14cc59-aa7a-4574-b92a-9183e229883a 15918161 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251a3f7 0xc00251a3f8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.527: INFO: Pod "webserver-deployment-6676bcd6d4-8s6p5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8s6p5 webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-8s6p5 ad24e77f-8fc7-47ba-8ea8-cfc2776a98c0 15918067 0 2020-06-26 00:27:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251a5a7 0xc00251a5a8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:27:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.528: INFO: Pod "webserver-deployment-6676bcd6d4-9g9fs" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9g9fs webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-9g9fs aa36bef2-cd96-4c56-909b-66b6f3069297 15918188 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251a767 0xc00251a768}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.528: INFO: Pod "webserver-deployment-6676bcd6d4-b2c6p" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b2c6p webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-b2c6p 2bd12e11-5d58-4e9f-abc6-b0ce819def2e 15918204 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251a937 0xc00251a938}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.528: INFO: Pod "webserver-deployment-6676bcd6d4-d7ch2" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-d7ch2 webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-d7ch2 a2d2cd40-b4e0-4d0d-93f9-65da0e932d2a 15918170 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251aae7 0xc00251aae8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.528: INFO: Pod "webserver-deployment-6676bcd6d4-fnbcd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fnbcd webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-fnbcd 212dc85c-5353-4fa6-87a8-065442518b6e 15918049 0 2020-06-26 00:27:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251acc7 0xc00251acc8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:27:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.529: INFO: Pod "webserver-deployment-6676bcd6d4-n4c7r" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-n4c7r webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-n4c7r d2d09937-6764-4b82-813f-8ba6f2a8fb68 15918169 0 2020-06-26 00:27:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251ae77 0xc00251ae78}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.139\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.139,StartTime:2020-06-26 00:27:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.529: INFO: Pod "webserver-deployment-6676bcd6d4-nf6fb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nf6fb webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-nf6fb 4f8a5882-4fac-4234-bc49-7bf821063085 15918183 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251b077 0xc00251b078}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.529: INFO: Pod "webserver-deployment-6676bcd6d4-pfx56" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pfx56 webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-pfx56 961f1977-9952-45e0-bff0-a7d062782f9f 15918213 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251b227 0xc00251b228}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.529: INFO: Pod "webserver-deployment-6676bcd6d4-qrzpp" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qrzpp webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-qrzpp 2dbc38b2-b789-4814-b93a-5032db6cce61 15918175 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251b3d7 0xc00251b3d8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.530: INFO: Pod "webserver-deployment-6676bcd6d4-zv54l" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zv54l webserver-deployment-6676bcd6d4- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-6676bcd6d4-zv54l 5dd32d11-ef9c-463d-bbfb-60227e70ced1 15918044 0 2020-06-26 00:27:47 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 538f69b9-ff37-4221-a41b-1eec628982d7 0xc00251b587 0xc00251b588}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"538f69b9-ff37-4221-a41b-1eec628982d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.530: INFO: Pod "webserver-deployment-84855cf797-2c4ps" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2c4ps webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-2c4ps fd9c1132-1c90-4a0c-86c1-7fd09d504493 15918197 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc00251b737 0xc00251b738}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.530: INFO: Pod "webserver-deployment-84855cf797-2n7mz" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2n7mz webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-2n7mz 2e6a7f1e-b64f-4486-9457-c226bf4ff1c2 15918180 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc00251b8c7 0xc00251b8c8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.530: INFO: Pod "webserver-deployment-84855cf797-5cvps" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5cvps webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-5cvps c92d95c4-06e1-4fd2-8baa-10c4b4d132a3 15917996 0 2020-06-26 00:27:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc00251ba57 0xc00251ba58}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.138\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 
00:27:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.138,StartTime:2020-06-26 00:27:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 00:27:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://af1563af5600f7f5ef9385e125fb7306936a5e35188a82eefd88b1b3f7ba5107,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.531: INFO: Pod "webserver-deployment-84855cf797-756qv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-756qv webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-756qv 367ad6d5-f16b-44af-84fc-52d16c9a3fdb 15918166 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc00251bc07 0xc00251bc08}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.531: INFO: Pod "webserver-deployment-84855cf797-bnwph" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bnwph webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-bnwph bece17d5-3fc7-4c87-9b94-9dcce9309bfb 15917961 0 2020-06-26 00:27:32 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc00251bd97 0xc00251bd98}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.136\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 
00:27:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.136,StartTime:2020-06-26 00:27:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 00:27:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fe9070f380d6adbceada960ab11327df6459443f07325344b3d90c5589fc3596,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.531: INFO: Pod "webserver-deployment-84855cf797-bsw2l" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bsw2l webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-bsw2l 65593347-a8bd-4262-bbf9-adec6bcc2051 15918159 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc00251bf47 0xc00251bf48}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.531: INFO: Pod "webserver-deployment-84855cf797-c5tqn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-c5tqn webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-c5tqn b0f05471-9675-4d1d-82df-33a74ba9f134 15918212 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc0039700d7 0xc0039700d8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.531: INFO: Pod "webserver-deployment-84855cf797-djfw4" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-djfw4 webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-djfw4 7295d170-4ecb-4f51-b53e-c213e13be065 15918200 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc0039703e7 0xc0039703e8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.532: INFO: Pod "webserver-deployment-84855cf797-jnvbz" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jnvbz webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-jnvbz c7b8e721-49f2-48ef-b433-bf60a2ddb9e7 15918129 0 2020-06-26 00:27:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc003970577 0xc003970578}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.532: INFO: Pod "webserver-deployment-84855cf797-kpxrc" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kpxrc webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-kpxrc b4cd93a3-b3e3-41f4-b13d-df908461b0da 15917944 0 2020-06-26 00:27:32 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc003970707 0xc003970708}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.135\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 
00:27:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.135,StartTime:2020-06-26 00:27:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 00:27:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1b8885059612c6fdc3198c4614ef310809a2d6cd093bb124849b73b982dc4e73,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.135,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.532: INFO: Pod "webserver-deployment-84855cf797-ljltf" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ljltf webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-ljltf 5730c25c-fbb0-4b81-b7ea-6948c3a33595 15918005 0 2020-06-26 00:27:32 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc0039708c7 0xc0039708c8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.195\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 
00:27:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.195,StartTime:2020-06-26 00:27:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 00:27:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3b2e6cb6974b458279a78e41a5a6d3e39892b7bf4129b378c5ec43763bd22cf5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.195,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.532: INFO: Pod "webserver-deployment-84855cf797-phhhl" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-phhhl webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-phhhl 61e85c5f-1d61-414f-b4c0-2d4a1647fc6a 15918001 0 2020-06-26 00:27:32 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc003970a77 0xc003970a78}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 
00:27:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.194,StartTime:2020-06-26 00:27:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 00:27:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0a037fdc2c4df4f8447f8a3112410e90e34919d270d517a1bf8077b06449de71,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.532: INFO: Pod "webserver-deployment-84855cf797-qbbbg" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-qbbbg webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-qbbbg 9916b5bd-ffcb-40bb-93ed-62b263f05f20 15918174 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc003970c27 0xc003970c28}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.533: INFO: Pod "webserver-deployment-84855cf797-rrxgc" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rrxgc webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-rrxgc e2f43a21-2305-40ef-be8f-c25ea5fe42c0 15917972 0 2020-06-26 00:27:32 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc003970dc7 0xc003970dc8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.137\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 
00:27:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.137,StartTime:2020-06-26 00:27:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 00:27:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://38bc1400bd45e82537e158b6ecc663ba96cc6c92f6c6b827f5655e66af407965,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.137,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.533: INFO: Pod "webserver-deployment-84855cf797-s2cxn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-s2cxn webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-s2cxn 0da742e7-6e3c-4852-be5b-ea15fba49d4c 15918178 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc003970f77 0xc003970f78}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.533: INFO: Pod "webserver-deployment-84855cf797-t5hq5" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-t5hq5 webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-t5hq5 e7e966ee-9d3a-4ee0-9310-7659613203c5 15918195 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc003971107 0xc003971108}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.533: INFO: Pod "webserver-deployment-84855cf797-wc4bg" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wc4bg webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-wc4bg 988bc2b5-1854-49e4-93e3-8e9cbfce0fe9 15917960 0 2020-06-26 00:27:32 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc003971297 0xc003971298}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.193\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 
00:27:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.193,StartTime:2020-06-26 00:27:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 00:27:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://22d14762649af6e0d6e195b04a5082f658bd030f5b736e251adc17e00f80c8b4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.534: INFO: Pod "webserver-deployment-84855cf797-x72rb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-x72rb webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-x72rb 49d64a27-d004-4ff9-abd2-e535e3a4a0ce 15918140 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc003971447 0xc003971448}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.534: INFO: Pod "webserver-deployment-84855cf797-xbjz7" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xbjz7 webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-xbjz7 30c4f826-5370-4efc-85c0-bfc948364018 15917936 0 2020-06-26 00:27:32 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc0039715f7 0xc0039715f8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.192\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 
00:27:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.192,StartTime:2020-06-26 00:27:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 00:27:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3d7cfe2239320512aec7a996846dbdbca16b01f22194b9819f9536b4e3ded97a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.192,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:27:53.534: INFO: Pod "webserver-deployment-84855cf797-xqrkp" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xqrkp webserver-deployment-84855cf797- deployment-7381 /api/v1/namespaces/deployment-7381/pods/webserver-deployment-84855cf797-xqrkp b552c384-8eb7-451d-b58b-c7ab21688c9f 15918167 0 2020-06-26 00:27:50 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a86fb915-12a2-45b2-a4ba-14a0cc6e683c 0xc0039717a7 0xc0039717a8}] [] [{kube-controller-manager Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a86fb915-12a2-45b2-a4ba-14a0cc6e683c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:27:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfkms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfkms,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfkms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:27:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:27:53.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7381" for this suite. • [SLOW TEST:21.733 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":294,"completed":139,"skipped":2115,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:27:54.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0626 00:28:09.130538 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
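The owner-reference wiring in the steps above ("set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well") can be reproduced by hand. A minimal sketch, assuming an illustrative pod name and a placeholder UID; the JSON patch and the foreground delete below are not commands from the suite, and the named --cascade=foreground form needs kubectl v1.20 or newer:

    # Look up the UID of the owner that should survive:
    kubectl get rc simpletest-rc-to-stay -o jsonpath='{.metadata.uid}'
    # Append a second ownerReference so the pod still has one valid owner
    # after the first owner is deleted:
    kubectl patch pod <pod-name> --type=json -p='[
      {"op": "add", "path": "/metadata/ownerReferences/-",
       "value": {"apiVersion": "v1", "kind": "ReplicationController",
                 "name": "simpletest-rc-to-stay", "uid": "<uid-from-above>",
                 "controller": false, "blockOwnerDeletion": false}}]'
    # Delete the other owner in foreground mode, i.e. it waits for its
    # dependents; pods that still have a valid owner are kept, which is
    # exactly what this test asserts.
    kubectl delete rc simpletest-rc-to-be-deleted --cascade=foreground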
Jun 26 00:28:09.130: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Jun 26 00:28:09.130: INFO: Deleting pod "simpletest-rc-to-be-deleted-5mwm8" in namespace "gc-8304" Jun 26 00:28:11.562: INFO: Deleting pod "simpletest-rc-to-be-deleted-dt4fq" in namespace "gc-8304" Jun 26 00:28:12.716: INFO: Deleting pod "simpletest-rc-to-be-deleted-kn576" in namespace "gc-8304" Jun 26 00:28:12.914: INFO: Deleting pod "simpletest-rc-to-be-deleted-kskpq" in namespace "gc-8304" Jun 26 00:28:14.442: INFO: Deleting pod "simpletest-rc-to-be-deleted-nctst" in namespace "gc-8304" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:28:15.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8304" for this suite. • [SLOW TEST:20.591 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":294,"completed":140,"skipped":2143,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:28:15.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jun 26 00:28:15.627: INFO: namespace kubectl-8830 Jun 26 00:28:15.627: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8830' Jun 26 00:28:21.221: INFO: stderr: "" Jun 26 00:28:21.221: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 26 00:28:22.237: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:28:22.237: INFO: Found 0 / 1 Jun 26 00:28:23.344: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:28:23.345: INFO: Found 0 / 1 Jun 26 00:28:24.242: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:28:24.242: INFO: Found 0 / 1 Jun 26 00:28:25.295: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:28:25.295: INFO: Found 0 / 1 Jun 26 00:28:26.445: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:28:26.445: INFO: Found 0 / 1 Jun 26 00:28:27.356: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:28:27.356: INFO: Found 1 / 1 Jun 26 00:28:27.356: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 26 00:28:27.360: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:28:27.360: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 26 00:28:27.360: INFO: wait on agnhost-master startup in kubectl-8830 Jun 26 00:28:27.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-lltnt agnhost-master --namespace=kubectl-8830' Jun 26 00:28:27.588: INFO: stderr: "" Jun 26 00:28:27.588: INFO: stdout: "Paused\n" STEP: exposing RC Jun 26 00:28:27.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8830' Jun 26 00:28:27.968: INFO: stderr: "" Jun 26 00:28:27.968: INFO: stdout: "service/rm2 exposed\n" Jun 26 00:28:28.116: INFO: Service rm2 in namespace kubectl-8830 found. STEP: exposing service Jun 26 00:28:30.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8830' Jun 26 00:28:30.280: INFO: stderr: "" Jun 26 00:28:30.280: INFO: stdout: "service/rm3 exposed\n" Jun 26 00:28:30.487: INFO: Service rm3 in namespace kubectl-8830 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:28:32.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8830" for this suite. 
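Stripped of the harness flags (--server, --kubeconfig, --namespace), the two expose calls above are plain kubectl and can be replayed against any namespace that contains the RC; the endpoints check at the end is an addition for verification, not part of the test:

    # Expose the replication controller, then expose the resulting service
    # again; kubectl copies the selector from the source object onto each
    # new Service.
    kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
    kubectl get endpoints rm2 rm3   # both should list the agnhost pod IP on port 6379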
• [SLOW TEST:17.346 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":294,"completed":141,"skipped":2154,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:28:32.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1013.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1013.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1013.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1013.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1013.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1013.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 00:28:40.768: INFO: DNS probes using dns-1013/dns-test-4053c07b-b294-4e45-beee-b2e9fd6666ae succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:28:40.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1013" for this suite. 
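For reference, the wheezy/jessie probe loops above reduce to one-shot checks when run by hand from a shell inside a pod in the dns-1013 namespace. A minimal sketch, with the doubled $$ from the log written as single $ and the retry loop and /results files dropped:

    # Hostname entries for the headless-service pod, long and short form:
    getent hosts dns-querier-1.dns-test-service.dns-1013.svc.cluster.local && echo OK
    getent hosts dns-querier-1 && echo OK
    # Pod A record derived from the pod IP, queried over UDP and TCP:
    podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1013.pod.cluster.local"}')
    dig +notcp +noall +answer +search "$podARec" A
    dig +tcp +noall +answer +search "$podARec" A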
• [SLOW TEST:8.387 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":294,"completed":142,"skipped":2161,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:28:40.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:28:42.200: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 00:28:44.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728122, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728122, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728122, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728122, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:28:47.697: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:28:47.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6260-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:28:48.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8963" for this suite. STEP: Destroying namespace "webhook-8963-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.209 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":294,"completed":143,"skipped":2170,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:28:49.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 26 00:28:49.238: INFO: Waiting up to 5m0s for pod "pod-735ed78d-285d-4675-aabe-2375f302b5aa" in namespace "emptydir-7053" to be "Succeeded or Failed" Jun 26 00:28:49.280: INFO: Pod "pod-735ed78d-285d-4675-aabe-2375f302b5aa": Phase="Pending", Reason="", readiness=false. Elapsed: 41.714658ms Jun 26 00:28:51.284: INFO: Pod "pod-735ed78d-285d-4675-aabe-2375f302b5aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046205929s Jun 26 00:28:53.289: INFO: Pod "pod-735ed78d-285d-4675-aabe-2375f302b5aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050904104s Jun 26 00:28:55.293: INFO: Pod "pod-735ed78d-285d-4675-aabe-2375f302b5aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.054970231s STEP: Saw pod success Jun 26 00:28:55.293: INFO: Pod "pod-735ed78d-285d-4675-aabe-2375f302b5aa" satisfied condition "Succeeded or Failed" Jun 26 00:28:55.296: INFO: Trying to get logs from node latest-worker pod pod-735ed78d-285d-4675-aabe-2375f302b5aa container test-container: STEP: delete the pod Jun 26 00:28:55.317: INFO: Waiting for pod pod-735ed78d-285d-4675-aabe-2375f302b5aa to disappear Jun 26 00:28:55.398: INFO: Pod pod-735ed78d-285d-4675-aabe-2375f302b5aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:28:55.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7053" for this suite. • [SLOW TEST:6.306 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":144,"skipped":2176,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:28:55.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 26 00:29:00.578: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:29:00.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9833" for this suite. 
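The adopt-and-release flow above is driven entirely by labels. A minimal sketch with kubectl; the pod name and label follow the log, while the ReplicaSet manifest is an assumed file, not part of the suite:

    # 1. A bare pod carrying the label the ReplicaSet will select on.
    kubectl run pod-adoption-release --image=docker.io/library/httpd:2.4.38-alpine \
      --labels=name=pod-adoption-release
    # 2. A ReplicaSet whose selector matches name=pod-adoption-release adopts
    #    the orphan pod instead of creating a new one (replicas: 1 assumed).
    kubectl create -f replicaset.yaml
    # 3. Changing the matched label releases the pod; the ReplicaSet notices
    #    it is short one replica and creates a replacement.
    kubectl label pod pod-adoption-release name=pod-adoption-release-released --overwrite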
• [SLOW TEST:5.313 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":294,"completed":145,"skipped":2197,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:29:00.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 26 00:29:00.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5389' Jun 26 00:29:00.987: INFO: stderr: "" Jun 26 00:29:00.987: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jun 26 00:29:06.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5389 -o json' Jun 26 00:29:06.243: INFO: stderr: "" Jun 26 00:29:06.243: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-26T00:29:00Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-06-26T00:29:00Z\"\n },\n {\n \"apiVersion\": 
\"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.220\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-06-26T00:29:04Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5389\",\n \"resourceVersion\": \"15919059\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5389/pods/e2e-test-httpd-pod\",\n \"uid\": \"902f28ea-30d4-4290-8919-294fe86f0fc2\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-vwvkq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-vwvkq\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-vwvkq\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T00:29:00Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T00:29:04Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T00:29:04Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T00:29:00Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://68592c50b2ef839c0a405481500a482a88192b5f651a92c6649f3a9a3bb551be\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-26T00:29:04Z\"\n }\n }\n }\n ],\n \"hostIP\": 
\"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.220\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.220\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-26T00:29:00Z\"\n }\n}\n" STEP: replace the image in the pod Jun 26 00:29:06.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5389' Jun 26 00:29:10.120: INFO: stderr: "" Jun 26 00:29:10.120: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1569 Jun 26 00:29:10.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5389' Jun 26 00:29:13.832: INFO: stderr: "" Jun 26 00:29:13.832: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:29:13.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5389" for this suite. • [SLOW TEST:13.119 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1560 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":294,"completed":146,"skipped":2223,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:29:13.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:29:13.925: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:29:14.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-454" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":294,"completed":147,"skipped":2223,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:29:14.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-274.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-274.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 00:29:22.790: INFO: DNS probes using dns-274/dns-test-6cc455f4-fd4d-4824-b764-ee86822171a4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:29:22.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-274" for this suite. 
• [SLOW TEST:8.351 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":294,"completed":148,"skipped":2239,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:29:22.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 26 00:29:31.540: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 00:29:31.558: INFO: Pod pod-with-prestop-exec-hook still exists Jun 26 00:29:33.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 00:29:33.564: INFO: Pod pod-with-prestop-exec-hook still exists Jun 26 00:29:35.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 00:29:35.563: INFO: Pod pod-with-prestop-exec-hook still exists Jun 26 00:29:37.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 00:29:37.563: INFO: Pod pod-with-prestop-exec-hook still exists Jun 26 00:29:39.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 00:29:39.564: INFO: Pod pod-with-prestop-exec-hook still exists Jun 26 00:29:41.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 00:29:41.563: INFO: Pod pod-with-prestop-exec-hook still exists Jun 26 00:29:43.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 00:29:43.564: INFO: Pod pod-with-prestop-exec-hook still exists Jun 26 00:29:45.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 26 00:29:45.563: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:29:45.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7856" for this suite. 
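The prestop flow above boils down to: create a pod with a lifecycle.preStop exec hook, delete it, and confirm the hook fired during graceful termination. A minimal hand-rolled equivalent, with an illustrative image and hook command rather than the suite's own:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran"]
EOF
# Deletion triggers the hook: the kubelet runs it before sending SIGTERM.
kubectl delete pod pod-with-prestop-exec-hook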
• [SLOW TEST:22.632 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":294,"completed":149,"skipped":2260,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:29:45.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-8fae085d-ec45-450a-a1f4-509d3d0bbb33 STEP: Creating a pod to test consume configMaps Jun 26 00:29:45.673: INFO: Waiting up to 5m0s for pod "pod-configmaps-9626c97d-228e-4aa1-9349-14bebdc04d26" in namespace "configmap-7309" to be "Succeeded or Failed" Jun 26 00:29:45.697: INFO: Pod "pod-configmaps-9626c97d-228e-4aa1-9349-14bebdc04d26": Phase="Pending", Reason="", readiness=false. Elapsed: 23.916869ms Jun 26 00:29:47.701: INFO: Pod "pod-configmaps-9626c97d-228e-4aa1-9349-14bebdc04d26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027999219s Jun 26 00:29:49.706: INFO: Pod "pod-configmaps-9626c97d-228e-4aa1-9349-14bebdc04d26": Phase="Running", Reason="", readiness=true. Elapsed: 4.032289569s Jun 26 00:29:51.710: INFO: Pod "pod-configmaps-9626c97d-228e-4aa1-9349-14bebdc04d26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036654329s STEP: Saw pod success Jun 26 00:29:51.710: INFO: Pod "pod-configmaps-9626c97d-228e-4aa1-9349-14bebdc04d26" satisfied condition "Succeeded or Failed" Jun 26 00:29:51.713: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-9626c97d-228e-4aa1-9349-14bebdc04d26 container configmap-volume-test: STEP: delete the pod Jun 26 00:29:51.759: INFO: Waiting for pod pod-configmaps-9626c97d-228e-4aa1-9349-14bebdc04d26 to disappear Jun 26 00:29:51.799: INFO: Pod pod-configmaps-9626c97d-228e-4aa1-9349-14bebdc04d26 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:29:51.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7309" for this suite. 
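What "with mappings as non-root" means concretely: the ConfigMap key is remapped to a different file path via items, and the consuming container runs under a non-root UID. A sketch under those assumptions (names and UID are illustrative):

kubectl create configmap cm-map-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-map-demo-pod
spec:
  securityContext:
    runAsUser: 1000        # non-root consumer
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-map-demo
      items:
      - key: data-1            # original key
        path: path/to/data-2   # remapped path inside the volume
EOF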
• [SLOW TEST:6.228 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":294,"completed":150,"skipped":2264,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:29:51.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:29:52.195: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 00:29:54.204: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728192, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728192, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728192, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728192, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:29:56.208: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728192, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728192, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728192, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728192, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:29:59.272: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:30:11.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6164" for this suite. STEP: Destroying namespace "webhook-6164-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.749 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":294,"completed":151,"skipped":2278,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:30:11.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:30:11.637: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jun 26 00:30:11.679: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:11.715: INFO: Number of nodes with available pods: 0 Jun 26 00:30:11.715: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:30:12.722: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:12.724: INFO: Number of nodes with available pods: 0 Jun 26 00:30:12.724: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:30:13.720: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:13.724: INFO: Number of nodes with available pods: 0 Jun 26 00:30:13.724: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:30:14.720: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:14.723: INFO: Number of nodes with available pods: 0 Jun 26 00:30:14.723: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:30:15.720: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:15.723: INFO: Number of nodes with available pods: 0 Jun 26 00:30:15.723: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:30:16.721: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:16.725: INFO: Number of nodes with available pods: 2 Jun 26 00:30:16.725: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 26 00:30:16.836: INFO: Wrong image for pod: daemon-set-54kv6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:16.836: INFO: Wrong image for pod: daemon-set-z7rmz. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:16.849: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:17.858: INFO: Wrong image for pod: daemon-set-54kv6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:17.858: INFO: Wrong image for pod: daemon-set-z7rmz. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:17.860: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:18.854: INFO: Wrong image for pod: daemon-set-54kv6. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:18.854: INFO: Wrong image for pod: daemon-set-z7rmz. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:18.858: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:19.854: INFO: Wrong image for pod: daemon-set-54kv6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:19.854: INFO: Wrong image for pod: daemon-set-z7rmz. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:19.854: INFO: Pod daemon-set-z7rmz is not available Jun 26 00:30:19.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:20.915: INFO: Wrong image for pod: daemon-set-54kv6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:20.915: INFO: Wrong image for pod: daemon-set-z7rmz. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:20.915: INFO: Pod daemon-set-z7rmz is not available Jun 26 00:30:20.922: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:21.854: INFO: Wrong image for pod: daemon-set-54kv6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:21.854: INFO: Pod daemon-set-fv78b is not available Jun 26 00:30:21.858: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:22.854: INFO: Wrong image for pod: daemon-set-54kv6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:22.854: INFO: Pod daemon-set-fv78b is not available Jun 26 00:30:22.859: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:23.853: INFO: Wrong image for pod: daemon-set-54kv6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:23.853: INFO: Pod daemon-set-fv78b is not available Jun 26 00:30:23.857: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:24.853: INFO: Wrong image for pod: daemon-set-54kv6. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:24.856: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:25.855: INFO: Wrong image for pod: daemon-set-54kv6. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jun 26 00:30:25.855: INFO: Pod daemon-set-54kv6 is not available Jun 26 00:30:25.859: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:26.854: INFO: Pod daemon-set-4bxc6 is not available Jun 26 00:30:26.859: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jun 26 00:30:26.863: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:26.867: INFO: Number of nodes with available pods: 1 Jun 26 00:30:26.867: INFO: Node latest-worker2 is running more than one daemon pod Jun 26 00:30:27.871: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:27.875: INFO: Number of nodes with available pods: 1 Jun 26 00:30:27.875: INFO: Node latest-worker2 is running more than one daemon pod Jun 26 00:30:28.873: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:28.877: INFO: Number of nodes with available pods: 1 Jun 26 00:30:28.877: INFO: Node latest-worker2 is running more than one daemon pod Jun 26 00:30:29.872: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:30:29.876: INFO: Number of nodes with available pods: 2 Jun 26 00:30:29.876: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4228, will wait for the garbage collector to delete the pods Jun 26 00:30:29.948: INFO: Deleting DaemonSet.extensions daemon-set took: 5.945719ms Jun 26 00:30:30.249: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.413947ms Jun 26 00:30:35.352: INFO: Number of nodes with available pods: 0 Jun 26 00:30:35.352: INFO: Number of running nodes: 0, number of available pods: 0 Jun 26 00:30:35.355: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4228/daemonsets","resourceVersion":"15919647"},"items":null} Jun 26 00:30:35.357: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4228/pods","resourceVersion":"15919647"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:30:35.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4228" for this suite. 
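The image churn logged above is an ordinary RollingUpdate rollout: the controller replaces one pod at a time (maxUnavailable defaults to 1), which matches the single "not available" pod at each step. Roughly the same update by hand, assuming the DaemonSet's container is named app (the suite patches the pod template directly):

kubectl -n daemonsets-4228 set image daemonset/daemon-set \
  app=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
kubectl -n daemonsets-4228 rollout status daemonset/daemon-set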
• [SLOW TEST:23.818 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":294,"completed":152,"skipped":2301,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:30:35.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0626 00:31:15.895106 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 26 00:31:15.895: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jun 26 00:31:15.895: INFO: Deleting pod "simpletest.rc-5h99n" in namespace "gc-9034" Jun 26 00:31:15.906: INFO: Deleting pod "simpletest.rc-94hj8" in namespace "gc-9034" Jun 26 00:31:15.939: INFO: Deleting pod "simpletest.rc-ctms4" in namespace "gc-9034" Jun 26 00:31:16.687: INFO: Deleting pod "simpletest.rc-d6ktx" in namespace "gc-9034" Jun 26 00:31:17.238: INFO: Deleting pod "simpletest.rc-g9stf" in namespace "gc-9034" Jun 26 00:31:17.439: INFO: Deleting pod "simpletest.rc-mtp9p" in namespace "gc-9034" Jun 26 00:31:17.628: INFO: Deleting pod "simpletest.rc-mxglg" in namespace "gc-9034" Jun 26 00:31:17.815: INFO: Deleting pod "simpletest.rc-r7fs5" in namespace "gc-9034" Jun 26 00:31:18.044: INFO: Deleting pod "simpletest.rc-w2km8" in namespace "gc-9034" Jun 26 00:31:18.579: INFO: Deleting pod "simpletest.rc-x7nq6" in namespace "gc-9034" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:31:18.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9034" for this suite. 
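The 30-second wait above is the point of the test: after an orphaning delete, the garbage collector must leave the rc's pods alone, which is why the suite then deletes the ten simpletest.rc-* pods itself. The same delete by hand (with kubectl of this era, --cascade=false maps to propagationPolicy=Orphan):

kubectl -n gc-9034 delete rc simpletest.rc --cascade=false
kubectl -n gc-9034 get pods   # the simpletest.rc-* pods survive the rc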
• [SLOW TEST:43.655 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":294,"completed":153,"skipped":2324,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:31:19.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-de8f7059-7d64-409c-9083-d87542e073e5 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:31:19.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1179" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":294,"completed":154,"skipped":2328,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:31:19.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 26 00:31:20.460: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:20.488: INFO: Number of nodes with available pods: 0 Jun 26 00:31:20.488: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:21.492: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:21.495: INFO: Number of nodes with available pods: 0 Jun 26 00:31:21.495: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:22.544: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:22.548: INFO: Number of nodes with available pods: 0 Jun 26 00:31:22.548: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:23.502: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:23.507: INFO: Number of nodes with available pods: 0 Jun 26 00:31:23.507: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:24.493: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:24.514: INFO: Number of nodes with available pods: 2 Jun 26 00:31:24.514: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jun 26 00:31:24.580: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:24.587: INFO: Number of nodes with available pods: 1 Jun 26 00:31:24.587: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:25.593: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:25.597: INFO: Number of nodes with available pods: 1 Jun 26 00:31:25.598: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:26.594: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:26.598: INFO: Number of nodes with available pods: 1 Jun 26 00:31:26.598: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:27.591: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:27.595: INFO: Number of nodes with available pods: 1 Jun 26 00:31:27.595: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:28.593: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:28.597: INFO: Number of nodes with available pods: 1 Jun 26 00:31:28.597: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:29.593: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:29.598: INFO: Number of nodes with available pods: 1 Jun 26 00:31:29.598: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:30.592: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:30.596: INFO: Number of nodes with available pods: 1 Jun 26 00:31:30.596: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:31.593: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:31.598: INFO: Number of nodes with available pods: 1 Jun 26 00:31:31.598: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:32.593: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:32.597: INFO: Number of nodes with available pods: 1 Jun 26 00:31:32.597: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:33.592: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:33.596: INFO: Number of nodes with available pods: 1 Jun 26 00:31:33.596: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:34.603: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:34.607: INFO: Number of nodes with available pods: 1 Jun 26 00:31:34.607: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:35.593: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:35.596: INFO: Number of nodes with available pods: 1 Jun 26 00:31:35.596: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:36.592: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:36.596: INFO: Number of nodes with available pods: 1 Jun 26 00:31:36.596: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:37.591: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:37.595: INFO: Number of nodes with available pods: 1 Jun 26 00:31:37.595: INFO: Node latest-worker is running more than one daemon pod Jun 26 00:31:38.593: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 26 00:31:38.598: INFO: Number of nodes with available pods: 2 Jun 26 00:31:38.598: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8099, will wait for the garbage collector to delete the pods Jun 26 00:31:38.661: INFO: Deleting DaemonSet.extensions daemon-set took: 6.858172ms Jun 26 00:31:38.961: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.261738ms Jun 26 00:31:45.264: INFO: Number of nodes with available pods: 0 Jun 26 00:31:45.264: INFO: Number of running nodes: 0, number of available pods: 0 Jun 26 00:31:45.268: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8099/daemonsets","resourceVersion":"15920157"},"items":null} Jun 26 00:31:45.270: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8099/pods","resourceVersion":"15920157"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:31:45.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8099" for this suite. 
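The revive phase above (available pods dropping to 1, then returning to 2) is plain DaemonSet self-healing and can be checked by hand; the pod name below is a placeholder for whichever daemon pod you delete:

kubectl -n daemonsets-8099 get pods -o wide
kubectl -n daemonsets-8099 delete pod daemon-set-xxxxx
kubectl -n daemonsets-8099 get daemonset daemon-set \
  -o jsonpath='{.status.numberAvailable}'   # climbs back to the node count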
• [SLOW TEST:25.669 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":294,"completed":155,"skipped":2332,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:31:45.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 26 00:31:45.384: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 26 00:31:45.415: INFO: Waiting for terminating namespaces to be deleted... Jun 26 00:31:45.418: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jun 26 00:31:45.423: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jun 26 00:31:45.423: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 26 00:31:45.423: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jun 26 00:31:45.423: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 26 00:31:45.423: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 26 00:31:45.423: INFO: Container kindnet-cni ready: true, restart count 5 Jun 26 00:31:45.423: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 26 00:31:45.423: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 00:31:45.423: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jun 26 00:31:45.428: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jun 26 00:31:45.428: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 26 00:31:45.428: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jun 26 00:31:45.428: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 26 00:31:45.428: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 26 00:31:45.428: INFO: Container kindnet-cni ready: true, restart count 5 Jun 26 00:31:45.428: INFO: 
kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 26 00:31:45.428: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-30fc236d-649d-4d4d-832f-ba1a8dba253b 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-30fc236d-649d-4d4d-832f-ba1a8dba253b off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-30fc236d-649d-4d4d-832f-ba1a8dba253b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:31:53.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3361" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.430 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":294,"completed":156,"skipped":2349,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:31:53.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:31:53.763: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:31:57.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5847" for this suite. 
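This test drives the pod's exec subresource over a raw websocket from the Go client; the everyday equivalent is kubectl exec, which negotiates the streaming protocol against the same endpoint for you (the pod name here is illustrative):

kubectl -n pods-5847 exec pod-exec-websockets -- /bin/sh -c 'echo remote execution works'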
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":294,"completed":157,"skipped":2367,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:31:57.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 26 00:31:58.785: INFO: Pod name wrapped-volume-race-8547be64-bcfc-4cb3-8bce-8a3d3d56b26b: Found 0 pods out of 5 Jun 26 00:32:03.794: INFO: Pod name wrapped-volume-race-8547be64-bcfc-4cb3-8bce-8a3d3d56b26b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8547be64-bcfc-4cb3-8bce-8a3d3d56b26b in namespace emptydir-wrapper-3470, will wait for the garbage collector to delete the pods Jun 26 00:32:18.266: INFO: Deleting ReplicationController wrapped-volume-race-8547be64-bcfc-4cb3-8bce-8a3d3d56b26b took: 6.248678ms Jun 26 00:32:18.566: INFO: Terminating ReplicationController wrapped-volume-race-8547be64-bcfc-4cb3-8bce-8a3d3d56b26b pods took: 300.254985ms STEP: Creating RC which spawns configmap-volume pods Jun 26 00:32:35.509: INFO: Pod name wrapped-volume-race-755f35dd-6de1-4767-87c1-a20d38ac9e44: Found 0 pods out of 5 Jun 26 00:32:40.518: INFO: Pod name wrapped-volume-race-755f35dd-6de1-4767-87c1-a20d38ac9e44: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-755f35dd-6de1-4767-87c1-a20d38ac9e44 in namespace emptydir-wrapper-3470, will wait for the garbage collector to delete the pods Jun 26 00:32:54.624: INFO: Deleting ReplicationController wrapped-volume-race-755f35dd-6de1-4767-87c1-a20d38ac9e44 took: 6.270946ms Jun 26 00:32:55.025: INFO: Terminating ReplicationController wrapped-volume-race-755f35dd-6de1-4767-87c1-a20d38ac9e44 pods took: 400.340451ms STEP: Creating RC which spawns configmap-volume pods Jun 26 00:33:05.582: INFO: Pod name wrapped-volume-race-18c719f6-41a3-457a-8593-a4930d786be4: Found 0 pods out of 5 Jun 26 00:33:10.591: INFO: Pod name wrapped-volume-race-18c719f6-41a3-457a-8593-a4930d786be4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-18c719f6-41a3-457a-8593-a4930d786be4 in namespace emptydir-wrapper-3470, will wait for the garbage collector to delete the pods Jun 26 00:33:26.690: INFO: Deleting ReplicationController wrapped-volume-race-18c719f6-41a3-457a-8593-a4930d786be4 took: 16.094288ms Jun 26 00:33:26.990: INFO: Terminating ReplicationController 
wrapped-volume-race-18c719f6-41a3-457a-8593-a4930d786be4 pods took: 300.279115ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:33:36.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3470" for this suite. • [SLOW TEST:98.345 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":294,"completed":158,"skipped":2382,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:33:36.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-9e3626e4-442a-42b5-b95d-907727322a4f STEP: Creating secret with name secret-projected-all-test-volume-2c3ed7f5-90b9-4a56-b30e-c81b642d8ef0 STEP: Creating a pod to test Check all projections for projected volume plugin Jun 26 00:33:36.406: INFO: Waiting up to 5m0s for pod "projected-volume-b11a0252-81d4-4fce-99b0-3939d74076a5" in namespace "projected-8019" to be "Succeeded or Failed" Jun 26 00:33:36.409: INFO: Pod "projected-volume-b11a0252-81d4-4fce-99b0-3939d74076a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.986601ms Jun 26 00:33:38.635: INFO: Pod "projected-volume-b11a0252-81d4-4fce-99b0-3939d74076a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22893139s Jun 26 00:33:40.639: INFO: Pod "projected-volume-b11a0252-81d4-4fce-99b0-3939d74076a5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.232841571s STEP: Saw pod success Jun 26 00:33:40.639: INFO: Pod "projected-volume-b11a0252-81d4-4fce-99b0-3939d74076a5" satisfied condition "Succeeded or Failed" Jun 26 00:33:40.642: INFO: Trying to get logs from node latest-worker2 pod projected-volume-b11a0252-81d4-4fce-99b0-3939d74076a5 container projected-all-volume-test: STEP: delete the pod Jun 26 00:33:40.700: INFO: Waiting for pod projected-volume-b11a0252-81d4-4fce-99b0-3939d74076a5 to disappear Jun 26 00:33:40.710: INFO: Pod projected-volume-b11a0252-81d4-4fce-99b0-3939d74076a5 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:33:40.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8019" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":294,"completed":159,"skipped":2475,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:33:40.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 26 00:33:40.795: INFO: Waiting up to 5m0s for pod "pod-47936507-6219-4792-953f-826d654c69bd" in namespace "emptydir-1220" to be "Succeeded or Failed" Jun 26 00:33:41.087: INFO: Pod "pod-47936507-6219-4792-953f-826d654c69bd": Phase="Pending", Reason="", readiness=false. Elapsed: 292.651384ms Jun 26 00:33:43.144: INFO: Pod "pod-47936507-6219-4792-953f-826d654c69bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348991198s Jun 26 00:33:45.153: INFO: Pod "pod-47936507-6219-4792-953f-826d654c69bd": Phase="Running", Reason="", readiness=true. Elapsed: 4.358753831s Jun 26 00:33:47.158: INFO: Pod "pod-47936507-6219-4792-953f-826d654c69bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.363287014s STEP: Saw pod success Jun 26 00:33:47.158: INFO: Pod "pod-47936507-6219-4792-953f-826d654c69bd" satisfied condition "Succeeded or Failed" Jun 26 00:33:47.161: INFO: Trying to get logs from node latest-worker2 pod pod-47936507-6219-4792-953f-826d654c69bd container test-container: STEP: delete the pod Jun 26 00:33:47.194: INFO: Waiting for pod pod-47936507-6219-4792-953f-826d654c69bd to disappear Jun 26 00:33:47.207: INFO: Pod pod-47936507-6219-4792-953f-826d654c69bd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:33:47.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1220" for this suite. • [SLOW TEST:6.538 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":160,"skipped":2480,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:33:47.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-a7c5b3fc-20f4-4cd4-bc6d-cc36dda0b1ba STEP: Creating a pod to test consume secrets Jun 26 00:33:47.400: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59b540f2-175a-4218-84c2-ee0aef1a61a2" in namespace "projected-6802" to be "Succeeded or Failed" Jun 26 00:33:47.403: INFO: Pod "pod-projected-secrets-59b540f2-175a-4218-84c2-ee0aef1a61a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.665567ms Jun 26 00:33:49.408: INFO: Pod "pod-projected-secrets-59b540f2-175a-4218-84c2-ee0aef1a61a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007816553s Jun 26 00:33:51.413: INFO: Pod "pod-projected-secrets-59b540f2-175a-4218-84c2-ee0aef1a61a2": Phase="Running", Reason="", readiness=true. Elapsed: 4.012356275s Jun 26 00:33:53.418: INFO: Pod "pod-projected-secrets-59b540f2-175a-4218-84c2-ee0aef1a61a2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.017451882s STEP: Saw pod success Jun 26 00:33:53.418: INFO: Pod "pod-projected-secrets-59b540f2-175a-4218-84c2-ee0aef1a61a2" satisfied condition "Succeeded or Failed" Jun 26 00:33:53.421: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-59b540f2-175a-4218-84c2-ee0aef1a61a2 container secret-volume-test: STEP: delete the pod Jun 26 00:33:53.471: INFO: Waiting for pod pod-projected-secrets-59b540f2-175a-4218-84c2-ee0aef1a61a2 to disappear Jun 26 00:33:53.478: INFO: Pod pod-projected-secrets-59b540f2-175a-4218-84c2-ee0aef1a61a2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:33:53.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6802" for this suite. • [SLOW TEST:6.229 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":294,"completed":161,"skipped":2503,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:33:53.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4296 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4296 STEP: Creating statefulset with conflicting port in namespace statefulset-4296 STEP: Waiting until pod test-pod will start running in namespace statefulset-4296 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4296 Jun 26 00:33:59.810: INFO: Observed stateful pod in namespace: statefulset-4296, name: ss-0, uid: 8d9f2680-ee0b-4e2b-97aa-9960fad82ca2, status phase: Pending. Waiting for statefulset controller to delete. 
Jun 26 00:33:59.849: INFO: Observed stateful pod in namespace: statefulset-4296, name: ss-0, uid: 8d9f2680-ee0b-4e2b-97aa-9960fad82ca2, status phase: Failed. Waiting for statefulset controller to delete. Jun 26 00:33:59.858: INFO: Observed stateful pod in namespace: statefulset-4296, name: ss-0, uid: 8d9f2680-ee0b-4e2b-97aa-9960fad82ca2, status phase: Failed. Waiting for statefulset controller to delete. Jun 26 00:33:59.891: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4296 STEP: Removing pod with conflicting port in namespace statefulset-4296 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4296 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 26 00:34:05.954: INFO: Deleting all statefulset in ns statefulset-4296 Jun 26 00:34:05.957: INFO: Scaling statefulset ss to 0 Jun 26 00:34:15.979: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 00:34:15.982: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:34:15.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4296" for this suite. • [SLOW TEST:22.518 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":294,"completed":162,"skipped":2521,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:34:16.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Jun 26 00:34:16.061: INFO: Waiting up to 5m0s for pod "client-containers-65d57e98-5329-4358-abad-08c7f4b1c1bc" in namespace "containers-9303" to be "Succeeded or Failed" Jun 26 00:34:16.107: INFO: Pod "client-containers-65d57e98-5329-4358-abad-08c7f4b1c1bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 46.119378ms Jun 26 00:34:18.111: INFO: Pod "client-containers-65d57e98-5329-4358-abad-08c7f4b1c1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05045844s Jun 26 00:34:20.115: INFO: Pod "client-containers-65d57e98-5329-4358-abad-08c7f4b1c1bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05424482s STEP: Saw pod success Jun 26 00:34:20.115: INFO: Pod "client-containers-65d57e98-5329-4358-abad-08c7f4b1c1bc" satisfied condition "Succeeded or Failed" Jun 26 00:34:20.118: INFO: Trying to get logs from node latest-worker2 pod client-containers-65d57e98-5329-4358-abad-08c7f4b1c1bc container test-container: STEP: delete the pod Jun 26 00:34:20.139: INFO: Waiting for pod client-containers-65d57e98-5329-4358-abad-08c7f4b1c1bc to disappear Jun 26 00:34:20.147: INFO: Pod client-containers-65d57e98-5329-4358-abad-08c7f4b1c1bc no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:34:20.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9303" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":294,"completed":163,"skipped":2525,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:34:20.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:34:20.260: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c94a85f-2cf8-4f3f-888e-58254c3e1353" in namespace "downward-api-766" to be "Succeeded or Failed" Jun 26 00:34:20.264: INFO: Pod "downwardapi-volume-0c94a85f-2cf8-4f3f-888e-58254c3e1353": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028257ms Jun 26 00:34:22.358: INFO: Pod "downwardapi-volume-0c94a85f-2cf8-4f3f-888e-58254c3e1353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098306173s Jun 26 00:34:24.363: INFO: Pod "downwardapi-volume-0c94a85f-2cf8-4f3f-888e-58254c3e1353": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.102991677s STEP: Saw pod success Jun 26 00:34:24.363: INFO: Pod "downwardapi-volume-0c94a85f-2cf8-4f3f-888e-58254c3e1353" satisfied condition "Succeeded or Failed" Jun 26 00:34:24.366: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0c94a85f-2cf8-4f3f-888e-58254c3e1353 container client-container: STEP: delete the pod Jun 26 00:34:24.403: INFO: Waiting for pod downwardapi-volume-0c94a85f-2cf8-4f3f-888e-58254c3e1353 to disappear Jun 26 00:34:24.412: INFO: Pod downwardapi-volume-0c94a85f-2cf8-4f3f-888e-58254c3e1353 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:34:24.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-766" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":294,"completed":164,"skipped":2533,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:34:24.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jun 26 00:34:31.029: INFO: Successfully updated pod "adopt-release-4dzsc" STEP: Checking that the Job readopts the Pod Jun 26 00:34:31.029: INFO: Waiting up to 15m0s for pod "adopt-release-4dzsc" in namespace "job-2537" to be "adopted" Jun 26 00:34:31.033: INFO: Pod "adopt-release-4dzsc": Phase="Running", Reason="", readiness=true. Elapsed: 3.542268ms Jun 26 00:34:33.037: INFO: Pod "adopt-release-4dzsc": Phase="Running", Reason="", readiness=true. Elapsed: 2.007968619s Jun 26 00:34:33.037: INFO: Pod "adopt-release-4dzsc" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jun 26 00:34:33.549: INFO: Successfully updated pod "adopt-release-4dzsc" STEP: Checking that the Job releases the Pod Jun 26 00:34:33.549: INFO: Waiting up to 15m0s for pod "adopt-release-4dzsc" in namespace "job-2537" to be "released" Jun 26 00:34:33.578: INFO: Pod "adopt-release-4dzsc": Phase="Running", Reason="", readiness=true. Elapsed: 29.102191ms Jun 26 00:34:35.640: INFO: Pod "adopt-release-4dzsc": Phase="Running", Reason="", readiness=true. Elapsed: 2.090583375s Jun 26 00:34:35.640: INFO: Pod "adopt-release-4dzsc" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:34:35.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2537" for this suite. 
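
Editor's note: the adopt/release cycle above hinges on labels matching the Job's selector. A sketch of the "release" half — stripping the Job's selector label from one of its pods with a merge patch, after which the controller drops its controller reference — is below. The namespace, pod name, and label key are illustrative placeholders, not values from this run.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Setting a label to null in a JSON merge patch deletes it; once the
	// pod no longer matches the Job's selector, the Job releases it.
	patch := []byte(`{"metadata":{"labels":{"job":null}}}`)
	_, err = cs.CoreV1().Pods("job-example").Patch(
		context.TODO(), "adopt-release-example",
		types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
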
• [SLOW TEST:11.228 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":294,"completed":165,"skipped":2537,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:34:35.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5253.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5253.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5253.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5253.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5253.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5253.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5253.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5253.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 101.236.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.236.101_udp@PTR;check="$$(dig +tcp +noall +answer +search 101.236.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.236.101_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5253.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5253.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5253.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5253.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5253.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5253.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5253.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5253.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5253.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 101.236.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.236.101_udp@PTR;check="$$(dig +tcp +noall +answer +search 101.236.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.236.101_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 00:34:42.254: INFO: Unable to read wheezy_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:42.256: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:42.259: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:42.261: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:42.277: INFO: Unable to read jessie_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:42.279: INFO: Unable to read jessie_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:42.282: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:42.284: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:42.300: INFO: Lookups using dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee failed for: [wheezy_udp@dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_udp@dns-test-service.dns-5253.svc.cluster.local jessie_tcp@dns-test-service.dns-5253.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local] Jun 26 00:34:47.306: INFO: Unable to read wheezy_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:47.310: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods 
dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:47.313: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:47.316: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:47.359: INFO: Unable to read jessie_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:47.363: INFO: Unable to read jessie_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:47.365: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:47.368: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:47.386: INFO: Lookups using dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee failed for: [wheezy_udp@dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_udp@dns-test-service.dns-5253.svc.cluster.local jessie_tcp@dns-test-service.dns-5253.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local] Jun 26 00:34:52.306: INFO: Unable to read wheezy_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:52.310: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:52.313: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:52.316: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:52.341: INFO: Unable to read jessie_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the 
server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:52.344: INFO: Unable to read jessie_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:52.347: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:52.350: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:52.369: INFO: Lookups using dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee failed for: [wheezy_udp@dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_udp@dns-test-service.dns-5253.svc.cluster.local jessie_tcp@dns-test-service.dns-5253.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local] Jun 26 00:34:57.306: INFO: Unable to read wheezy_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:57.310: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:57.314: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:57.317: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:57.341: INFO: Unable to read jessie_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:57.343: INFO: Unable to read jessie_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:57.346: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:57.349: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod 
dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:34:57.366: INFO: Lookups using dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee failed for: [wheezy_udp@dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_udp@dns-test-service.dns-5253.svc.cluster.local jessie_tcp@dns-test-service.dns-5253.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local] Jun 26 00:35:02.306: INFO: Unable to read wheezy_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:02.311: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:02.315: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:02.318: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:02.342: INFO: Unable to read jessie_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:02.345: INFO: Unable to read jessie_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:02.348: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:02.350: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:02.369: INFO: Lookups using dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee failed for: [wheezy_udp@dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_udp@dns-test-service.dns-5253.svc.cluster.local jessie_tcp@dns-test-service.dns-5253.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local] Jun 26 
00:35:07.307: INFO: Unable to read wheezy_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:07.310: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:07.314: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:07.318: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:07.341: INFO: Unable to read jessie_udp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:07.343: INFO: Unable to read jessie_tcp@dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:07.346: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:07.348: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local from pod dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee: the server could not find the requested resource (get pods dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee) Jun 26 00:35:07.366: INFO: Lookups using dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee failed for: [wheezy_udp@dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@dns-test-service.dns-5253.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_udp@dns-test-service.dns-5253.svc.cluster.local jessie_tcp@dns-test-service.dns-5253.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5253.svc.cluster.local] Jun 26 00:35:12.454: INFO: DNS probes using dns-5253/dns-test-91b828b5-67e5-4350-a9c6-c1eed124b0ee succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:35:13.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5253" for this suite. 
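
Editor's note: the wheezy/jessie probe pods above loop over dig to verify A, SRV, and PTR records for the headless service. A much-reduced sketch of the same checks from Go's standard resolver follows; the service and namespace names are illustrative, and outside a cluster these lookups will simply fail.

package main

import (
	"fmt"
	"net"
)

func main() {
	host := "dns-test-service.dns-example.svc.cluster.local"

	// A record for the service name.
	addrs, err := net.LookupHost(host)
	fmt.Println("A:", addrs, err)

	// SRV records for the named port "http" over TCP, i.e. the
	// _http._tcp.dns-test-service.dns-example.svc.cluster.local name
	// the dig loops query above.
	_, srvs, err := net.LookupSRV("http", "tcp", host)
	if err == nil {
		for _, s := range srvs {
			fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
		}
	}
}
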
• [SLOW TEST:38.301 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":294,"completed":166,"skipped":2567,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:35:13.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 26 00:35:15.035: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 26 00:35:17.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728515, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728515, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728515, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728514, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:35:19.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728515, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728515, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728515, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728514, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:35:22.079: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:35:22.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:35:23.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9106" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.437 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":294,"completed":167,"skipped":2585,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:35:23.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:35:24.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 00:35:26.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728524, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728524, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728524, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728524, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:35:29.690: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jun 26 00:35:33.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-8662 to-be-attached-pod -i -c=container1' Jun 26 00:35:36.609: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:35:36.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8662" for this suite. STEP: Destroying namespace "webhook-8662-markers" for this suite. 
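
Editor's note: the "kubectl attach ... rc: 1" above is the webhook doing its job. A sketch of the kind of registration the test performs — a validating webhook intercepting CONNECT on pods/attach — is below. The configuration name, service reference, path, and CA bundle are all placeholders; a real registration needs the serving cert's CA.

package main

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	fail := admissionv1.Fail
	none := admissionv1.SideEffectClassNone
	path := "/pods/attach"
	webhook := &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-attaching-pod-example"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "deny-attaching-pod.example.com",
			// Intercept CONNECT requests to the pods/attach subresource,
			// which is what "kubectl attach" issues.
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Connect},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods/attach"},
				},
			}},
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-example",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("placeholder-ca-bundle"),
			},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	_, err = cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), webhook, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
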
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.352 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":294,"completed":168,"skipped":2610,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:35:36.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:35:36.856: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:35:38.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3050" for this suite. 
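
Editor's note: the defaulting spec's body is terse in the log, so here is a sketch of the mechanism it exercises — a structural CRD schema whose property carries a default, which the API server applies both on incoming requests and when serving undefaulted objects from storage. The field name and default are made up for illustration.

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	schema := apiextv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextv1.JSONSchemaProps{
			"spec": {
				Type: "object",
				Properties: map[string]apiextv1.JSONSchemaProps{
					// An unset spec.replicas comes back as 1 both on
					// create/update requests and when read from storage.
					"replicas": {
						Type:    "integer",
						Default: &apiextv1.JSON{Raw: []byte(`1`)},
					},
				},
			},
		},
	}
	fmt.Printf("%+v\n", schema)
}
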
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":294,"completed":169,"skipped":2614,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:35:38.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:36:14.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6948" for this suite. 
• [SLOW TEST:36.859 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":294,"completed":170,"skipped":2615,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:36:14.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:36:14.977: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9c306ce2-aa22-4f54-a342-f9e56bf46ae7" in namespace "security-context-test-2293" to be "Succeeded or Failed" Jun 26 00:36:15.018: INFO: Pod "alpine-nnp-false-9c306ce2-aa22-4f54-a342-f9e56bf46ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 40.828456ms Jun 26 00:36:17.022: INFO: Pod "alpine-nnp-false-9c306ce2-aa22-4f54-a342-f9e56bf46ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044586081s Jun 26 00:36:19.027: INFO: Pod "alpine-nnp-false-9c306ce2-aa22-4f54-a342-f9e56bf46ae7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049192026s Jun 26 00:36:19.027: INFO: Pod "alpine-nnp-false-9c306ce2-aa22-4f54-a342-f9e56bf46ae7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:36:19.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2293" for this suite. 
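The alpine-nnp-false-* pod above pairs a non-root UID with allowPrivilegeEscalation=false, so a setuid binary cannot raise the effective UID. A minimal sketch of an equivalent pod spec (corev1 types; image, UID, and command are illustrative assumptions):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

// noPrivilegeEscalation builds a pod that runs as UID 1000 with privilege
// escalation explicitly disabled; the container's own check of its
// effective UID is what drives the pod to Succeeded.
func noPrivilegeEscalation() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "nnp-test",
				Image:   "alpine:3.12",
				Command: []string{"id", "-u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:                int64Ptr(1000),
					AllowPrivilegeEscalation: boolPtr(false),
				},
			}},
		},
	}
}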
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":171,"skipped":2618,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:36:19.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-6e10ae36-b10a-44b6-b480-0f6430a35c2b STEP: Creating a pod to test consume secrets Jun 26 00:36:19.106: INFO: Waiting up to 5m0s for pod "pod-secrets-76ffaf99-9e5e-423c-8366-9a90c3c24f61" in namespace "secrets-6995" to be "Succeeded or Failed" Jun 26 00:36:19.122: INFO: Pod "pod-secrets-76ffaf99-9e5e-423c-8366-9a90c3c24f61": Phase="Pending", Reason="", readiness=false. Elapsed: 15.714385ms Jun 26 00:36:21.126: INFO: Pod "pod-secrets-76ffaf99-9e5e-423c-8366-9a90c3c24f61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019788828s Jun 26 00:36:23.130: INFO: Pod "pod-secrets-76ffaf99-9e5e-423c-8366-9a90c3c24f61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023893828s STEP: Saw pod success Jun 26 00:36:23.130: INFO: Pod "pod-secrets-76ffaf99-9e5e-423c-8366-9a90c3c24f61" satisfied condition "Succeeded or Failed" Jun 26 00:36:23.133: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-76ffaf99-9e5e-423c-8366-9a90c3c24f61 container secret-volume-test: STEP: delete the pod Jun 26 00:36:23.222: INFO: Waiting for pod pod-secrets-76ffaf99-9e5e-423c-8366-9a90c3c24f61 to disappear Jun 26 00:36:23.224: INFO: Pod pod-secrets-76ffaf99-9e5e-423c-8366-9a90c3c24f61 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:36:23.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6995" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":172,"skipped":2624,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:36:23.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 26 00:36:23.487: INFO: Waiting up to 5m0s for pod "pod-51a9a0be-c37d-4292-94fe-89a5eeb27cb9" in namespace "emptydir-398" to be "Succeeded or Failed" Jun 26 00:36:23.587: INFO: Pod "pod-51a9a0be-c37d-4292-94fe-89a5eeb27cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 100.437521ms Jun 26 00:36:25.591: INFO: Pod "pod-51a9a0be-c37d-4292-94fe-89a5eeb27cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104176355s Jun 26 00:36:27.594: INFO: Pod "pod-51a9a0be-c37d-4292-94fe-89a5eeb27cb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107147028s STEP: Saw pod success Jun 26 00:36:27.594: INFO: Pod "pod-51a9a0be-c37d-4292-94fe-89a5eeb27cb9" satisfied condition "Succeeded or Failed" Jun 26 00:36:27.596: INFO: Trying to get logs from node latest-worker2 pod pod-51a9a0be-c37d-4292-94fe-89a5eeb27cb9 container test-container: STEP: delete the pod Jun 26 00:36:27.672: INFO: Waiting for pod pod-51a9a0be-c37d-4292-94fe-89a5eeb27cb9 to disappear Jun 26 00:36:27.676: INFO: Pod pod-51a9a0be-c37d-4292-94fe-89a5eeb27cb9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:36:27.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-398" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":173,"skipped":2646,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:36:27.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 26 00:36:37.864: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3444 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:36:37.864: INFO: >>> kubeConfig: /root/.kube/config I0626 00:36:37.899175 8 log.go:172] (0xc00293fef0) (0xc0020e1ae0) Create stream I0626 00:36:37.899210 8 log.go:172] (0xc00293fef0) (0xc0020e1ae0) Stream added, broadcasting: 1 I0626 00:36:37.901767 8 log.go:172] (0xc00293fef0) Reply frame received for 1 I0626 00:36:37.901815 8 log.go:172] (0xc00293fef0) (0xc0027b1b80) Create stream I0626 00:36:37.901831 8 log.go:172] (0xc00293fef0) (0xc0027b1b80) Stream added, broadcasting: 3 I0626 00:36:37.903030 8 log.go:172] (0xc00293fef0) Reply frame received for 3 I0626 00:36:37.903087 8 log.go:172] (0xc00293fef0) (0xc001d18c80) Create stream I0626 00:36:37.903103 8 log.go:172] (0xc00293fef0) (0xc001d18c80) Stream added, broadcasting: 5 I0626 00:36:37.904217 8 log.go:172] (0xc00293fef0) Reply frame received for 5 I0626 00:36:37.968031 8 log.go:172] (0xc00293fef0) Data frame received for 5 I0626 00:36:37.968070 8 log.go:172] (0xc001d18c80) (5) Data frame handling I0626 00:36:37.968098 8 log.go:172] (0xc00293fef0) Data frame received for 3 I0626 00:36:37.968113 8 log.go:172] (0xc0027b1b80) (3) Data frame handling I0626 00:36:37.968130 8 log.go:172] (0xc0027b1b80) (3) Data frame sent I0626 00:36:37.968146 8 log.go:172] (0xc00293fef0) Data frame received for 3 I0626 00:36:37.968156 8 log.go:172] (0xc0027b1b80) (3) Data frame handling I0626 00:36:37.970044 8 log.go:172] (0xc00293fef0) Data frame received for 1 I0626 00:36:37.970062 8 log.go:172] (0xc0020e1ae0) (1) Data frame handling I0626 00:36:37.970075 8 log.go:172] (0xc0020e1ae0) (1) Data frame sent I0626 00:36:37.970127 8 log.go:172] (0xc00293fef0) (0xc0020e1ae0) Stream removed, broadcasting: 1 I0626 00:36:37.970171 8 log.go:172] (0xc00293fef0) Go away received I0626 00:36:37.970253 8 log.go:172] (0xc00293fef0) (0xc0020e1ae0) Stream removed, broadcasting: 1 I0626 00:36:37.970267 8 log.go:172] 
(0xc00293fef0) (0xc0027b1b80) Stream removed, broadcasting: 3 I0626 00:36:37.970273 8 log.go:172] (0xc00293fef0) (0xc001d18c80) Stream removed, broadcasting: 5 Jun 26 00:36:37.970: INFO: Exec stderr: "" Jun 26 00:36:37.970: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3444 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:36:37.970: INFO: >>> kubeConfig: /root/.kube/config I0626 00:36:38.006893 8 log.go:172] (0xc00116eb00) (0xc001d19180) Create stream I0626 00:36:38.006925 8 log.go:172] (0xc00116eb00) (0xc001d19180) Stream added, broadcasting: 1 I0626 00:36:38.009444 8 log.go:172] (0xc00116eb00) Reply frame received for 1 I0626 00:36:38.009504 8 log.go:172] (0xc00116eb00) (0xc0027b1c20) Create stream I0626 00:36:38.009530 8 log.go:172] (0xc00116eb00) (0xc0027b1c20) Stream added, broadcasting: 3 I0626 00:36:38.010630 8 log.go:172] (0xc00116eb00) Reply frame received for 3 I0626 00:36:38.010686 8 log.go:172] (0xc00116eb00) (0xc0027b1cc0) Create stream I0626 00:36:38.010703 8 log.go:172] (0xc00116eb00) (0xc0027b1cc0) Stream added, broadcasting: 5 I0626 00:36:38.011869 8 log.go:172] (0xc00116eb00) Reply frame received for 5 I0626 00:36:38.085616 8 log.go:172] (0xc00116eb00) Data frame received for 3 I0626 00:36:38.085656 8 log.go:172] (0xc0027b1c20) (3) Data frame handling I0626 00:36:38.085680 8 log.go:172] (0xc0027b1c20) (3) Data frame sent I0626 00:36:38.085694 8 log.go:172] (0xc00116eb00) Data frame received for 3 I0626 00:36:38.085714 8 log.go:172] (0xc0027b1c20) (3) Data frame handling I0626 00:36:38.085750 8 log.go:172] (0xc00116eb00) Data frame received for 5 I0626 00:36:38.085785 8 log.go:172] (0xc0027b1cc0) (5) Data frame handling I0626 00:36:38.087796 8 log.go:172] (0xc00116eb00) Data frame received for 1 I0626 00:36:38.087830 8 log.go:172] (0xc001d19180) (1) Data frame handling I0626 00:36:38.087869 8 log.go:172] (0xc001d19180) (1) Data frame sent I0626 00:36:38.087895 8 log.go:172] (0xc00116eb00) (0xc001d19180) Stream removed, broadcasting: 1 I0626 00:36:38.087957 8 log.go:172] (0xc00116eb00) Go away received I0626 00:36:38.088169 8 log.go:172] (0xc00116eb00) (0xc001d19180) Stream removed, broadcasting: 1 I0626 00:36:38.088224 8 log.go:172] (0xc00116eb00) (0xc0027b1c20) Stream removed, broadcasting: 3 I0626 00:36:38.088251 8 log.go:172] (0xc00116eb00) (0xc0027b1cc0) Stream removed, broadcasting: 5 Jun 26 00:36:38.088: INFO: Exec stderr: "" Jun 26 00:36:38.088: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3444 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:36:38.088: INFO: >>> kubeConfig: /root/.kube/config I0626 00:36:38.120243 8 log.go:172] (0xc0025f26e0) (0xc0020e1cc0) Create stream I0626 00:36:38.120280 8 log.go:172] (0xc0025f26e0) (0xc0020e1cc0) Stream added, broadcasting: 1 I0626 00:36:38.123177 8 log.go:172] (0xc0025f26e0) Reply frame received for 1 I0626 00:36:38.123220 8 log.go:172] (0xc0025f26e0) (0xc002be92c0) Create stream I0626 00:36:38.123234 8 log.go:172] (0xc0025f26e0) (0xc002be92c0) Stream added, broadcasting: 3 I0626 00:36:38.124334 8 log.go:172] (0xc0025f26e0) Reply frame received for 3 I0626 00:36:38.124376 8 log.go:172] (0xc0025f26e0) (0xc001d192c0) Create stream I0626 00:36:38.124384 8 log.go:172] (0xc0025f26e0) (0xc001d192c0) Stream added, broadcasting: 5 I0626 00:36:38.125789 8 log.go:172] (0xc0025f26e0) Reply frame received for 5 I0626 
00:36:38.194522 8 log.go:172] (0xc0025f26e0) Data frame received for 3 I0626 00:36:38.194554 8 log.go:172] (0xc002be92c0) (3) Data frame handling I0626 00:36:38.194578 8 log.go:172] (0xc0025f26e0) Data frame received for 5 I0626 00:36:38.194609 8 log.go:172] (0xc001d192c0) (5) Data frame handling I0626 00:36:38.194637 8 log.go:172] (0xc002be92c0) (3) Data frame sent I0626 00:36:38.194653 8 log.go:172] (0xc0025f26e0) Data frame received for 3 I0626 00:36:38.194675 8 log.go:172] (0xc002be92c0) (3) Data frame handling I0626 00:36:38.196197 8 log.go:172] (0xc0025f26e0) Data frame received for 1 I0626 00:36:38.196227 8 log.go:172] (0xc0020e1cc0) (1) Data frame handling I0626 00:36:38.196246 8 log.go:172] (0xc0020e1cc0) (1) Data frame sent I0626 00:36:38.196310 8 log.go:172] (0xc0025f26e0) (0xc0020e1cc0) Stream removed, broadcasting: 1 I0626 00:36:38.196337 8 log.go:172] (0xc0025f26e0) Go away received I0626 00:36:38.196428 8 log.go:172] (0xc0025f26e0) (0xc0020e1cc0) Stream removed, broadcasting: 1 I0626 00:36:38.196455 8 log.go:172] (0xc0025f26e0) (0xc002be92c0) Stream removed, broadcasting: 3 I0626 00:36:38.196476 8 log.go:172] (0xc0025f26e0) (0xc001d192c0) Stream removed, broadcasting: 5 Jun 26 00:36:38.196: INFO: Exec stderr: "" Jun 26 00:36:38.196: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3444 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:36:38.196: INFO: >>> kubeConfig: /root/.kube/config I0626 00:36:38.233612 8 log.go:172] (0xc0025f2d10) (0xc0020e1f40) Create stream I0626 00:36:38.233635 8 log.go:172] (0xc0025f2d10) (0xc0020e1f40) Stream added, broadcasting: 1 I0626 00:36:38.235473 8 log.go:172] (0xc0025f2d10) Reply frame received for 1 I0626 00:36:38.235498 8 log.go:172] (0xc0025f2d10) (0xc0006ec820) Create stream I0626 00:36:38.235505 8 log.go:172] (0xc0025f2d10) (0xc0006ec820) Stream added, broadcasting: 3 I0626 00:36:38.236391 8 log.go:172] (0xc0025f2d10) Reply frame received for 3 I0626 00:36:38.236423 8 log.go:172] (0xc0025f2d10) (0xc0025f0000) Create stream I0626 00:36:38.236435 8 log.go:172] (0xc0025f2d10) (0xc0025f0000) Stream added, broadcasting: 5 I0626 00:36:38.237570 8 log.go:172] (0xc0025f2d10) Reply frame received for 5 I0626 00:36:38.311209 8 log.go:172] (0xc0025f2d10) Data frame received for 5 I0626 00:36:38.311250 8 log.go:172] (0xc0025f0000) (5) Data frame handling I0626 00:36:38.311283 8 log.go:172] (0xc0025f2d10) Data frame received for 3 I0626 00:36:38.311296 8 log.go:172] (0xc0006ec820) (3) Data frame handling I0626 00:36:38.311313 8 log.go:172] (0xc0006ec820) (3) Data frame sent I0626 00:36:38.311325 8 log.go:172] (0xc0025f2d10) Data frame received for 3 I0626 00:36:38.311336 8 log.go:172] (0xc0006ec820) (3) Data frame handling I0626 00:36:38.312766 8 log.go:172] (0xc0025f2d10) Data frame received for 1 I0626 00:36:38.312786 8 log.go:172] (0xc0020e1f40) (1) Data frame handling I0626 00:36:38.312797 8 log.go:172] (0xc0020e1f40) (1) Data frame sent I0626 00:36:38.312814 8 log.go:172] (0xc0025f2d10) (0xc0020e1f40) Stream removed, broadcasting: 1 I0626 00:36:38.312829 8 log.go:172] (0xc0025f2d10) Go away received I0626 00:36:38.312873 8 log.go:172] (0xc0025f2d10) (0xc0020e1f40) Stream removed, broadcasting: 1 I0626 00:36:38.312919 8 log.go:172] (0xc0025f2d10) (0xc0006ec820) Stream removed, broadcasting: 3 I0626 00:36:38.312940 8 log.go:172] (0xc0025f2d10) (0xc0025f0000) Stream removed, broadcasting: 5 Jun 26 00:36:38.312: INFO: Exec stderr: "" 
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 26 00:36:38.312: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3444 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:36:38.313: INFO: >>> kubeConfig: /root/.kube/config I0626 00:36:38.344504 8 log.go:172] (0xc00116f290) (0xc001d19540) Create stream I0626 00:36:38.344539 8 log.go:172] (0xc00116f290) (0xc001d19540) Stream added, broadcasting: 1 I0626 00:36:38.359976 8 log.go:172] (0xc00116f290) Reply frame received for 1 I0626 00:36:38.360061 8 log.go:172] (0xc00116f290) (0xc0027b1d60) Create stream I0626 00:36:38.360096 8 log.go:172] (0xc00116f290) (0xc0027b1d60) Stream added, broadcasting: 3 I0626 00:36:38.361698 8 log.go:172] (0xc00116f290) Reply frame received for 3 I0626 00:36:38.361750 8 log.go:172] (0xc00116f290) (0xc0006ecfa0) Create stream I0626 00:36:38.361766 8 log.go:172] (0xc00116f290) (0xc0006ecfa0) Stream added, broadcasting: 5 I0626 00:36:38.362771 8 log.go:172] (0xc00116f290) Reply frame received for 5 I0626 00:36:38.409447 8 log.go:172] (0xc00116f290) Data frame received for 3 I0626 00:36:38.409485 8 log.go:172] (0xc0027b1d60) (3) Data frame handling I0626 00:36:38.409497 8 log.go:172] (0xc0027b1d60) (3) Data frame sent I0626 00:36:38.409506 8 log.go:172] (0xc00116f290) Data frame received for 3 I0626 00:36:38.409513 8 log.go:172] (0xc0027b1d60) (3) Data frame handling I0626 00:36:38.409547 8 log.go:172] (0xc00116f290) Data frame received for 5 I0626 00:36:38.409571 8 log.go:172] (0xc0006ecfa0) (5) Data frame handling I0626 00:36:38.410832 8 log.go:172] (0xc00116f290) Data frame received for 1 I0626 00:36:38.410859 8 log.go:172] (0xc001d19540) (1) Data frame handling I0626 00:36:38.410883 8 log.go:172] (0xc001d19540) (1) Data frame sent I0626 00:36:38.410896 8 log.go:172] (0xc00116f290) (0xc001d19540) Stream removed, broadcasting: 1 I0626 00:36:38.410911 8 log.go:172] (0xc00116f290) Go away received I0626 00:36:38.411004 8 log.go:172] (0xc00116f290) (0xc001d19540) Stream removed, broadcasting: 1 I0626 00:36:38.411039 8 log.go:172] (0xc00116f290) (0xc0027b1d60) Stream removed, broadcasting: 3 I0626 00:36:38.411056 8 log.go:172] (0xc00116f290) (0xc0006ecfa0) Stream removed, broadcasting: 5 Jun 26 00:36:38.411: INFO: Exec stderr: "" Jun 26 00:36:38.411: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3444 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:36:38.411: INFO: >>> kubeConfig: /root/.kube/config I0626 00:36:38.442106 8 log.go:172] (0xc00116f8c0) (0xc001d19860) Create stream I0626 00:36:38.442144 8 log.go:172] (0xc00116f8c0) (0xc001d19860) Stream added, broadcasting: 1 I0626 00:36:38.444030 8 log.go:172] (0xc00116f8c0) Reply frame received for 1 I0626 00:36:38.444070 8 log.go:172] (0xc00116f8c0) (0xc002be9360) Create stream I0626 00:36:38.444085 8 log.go:172] (0xc00116f8c0) (0xc002be9360) Stream added, broadcasting: 3 I0626 00:36:38.445522 8 log.go:172] (0xc00116f8c0) Reply frame received for 3 I0626 00:36:38.445601 8 log.go:172] (0xc00116f8c0) (0xc0027b1f40) Create stream I0626 00:36:38.445628 8 log.go:172] (0xc00116f8c0) (0xc0027b1f40) Stream added, broadcasting: 5 I0626 00:36:38.446900 8 log.go:172] (0xc00116f8c0) Reply frame received for 5 I0626 00:36:38.514514 8 log.go:172] (0xc00116f8c0) Data frame received for 3 I0626 00:36:38.514543 8 
log.go:172] (0xc002be9360) (3) Data frame handling I0626 00:36:38.514563 8 log.go:172] (0xc002be9360) (3) Data frame sent I0626 00:36:38.514594 8 log.go:172] (0xc00116f8c0) Data frame received for 3 I0626 00:36:38.514604 8 log.go:172] (0xc002be9360) (3) Data frame handling I0626 00:36:38.514634 8 log.go:172] (0xc00116f8c0) Data frame received for 5 I0626 00:36:38.514661 8 log.go:172] (0xc0027b1f40) (5) Data frame handling I0626 00:36:38.516155 8 log.go:172] (0xc00116f8c0) Data frame received for 1 I0626 00:36:38.516173 8 log.go:172] (0xc001d19860) (1) Data frame handling I0626 00:36:38.516187 8 log.go:172] (0xc001d19860) (1) Data frame sent I0626 00:36:38.516198 8 log.go:172] (0xc00116f8c0) (0xc001d19860) Stream removed, broadcasting: 1 I0626 00:36:38.516246 8 log.go:172] (0xc00116f8c0) Go away received I0626 00:36:38.516297 8 log.go:172] (0xc00116f8c0) (0xc001d19860) Stream removed, broadcasting: 1 I0626 00:36:38.516322 8 log.go:172] (0xc00116f8c0) (0xc002be9360) Stream removed, broadcasting: 3 I0626 00:36:38.516337 8 log.go:172] (0xc00116f8c0) (0xc0027b1f40) Stream removed, broadcasting: 5 Jun 26 00:36:38.516: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 26 00:36:38.516: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3444 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:36:38.516: INFO: >>> kubeConfig: /root/.kube/config I0626 00:36:38.552673 8 log.go:172] (0xc001bec2c0) (0xc0006edc20) Create stream I0626 00:36:38.552711 8 log.go:172] (0xc001bec2c0) (0xc0006edc20) Stream added, broadcasting: 1 I0626 00:36:38.554894 8 log.go:172] (0xc001bec2c0) Reply frame received for 1 I0626 00:36:38.554936 8 log.go:172] (0xc001bec2c0) (0xc0006edd60) Create stream I0626 00:36:38.554958 8 log.go:172] (0xc001bec2c0) (0xc0006edd60) Stream added, broadcasting: 3 I0626 00:36:38.555978 8 log.go:172] (0xc001bec2c0) Reply frame received for 3 I0626 00:36:38.556009 8 log.go:172] (0xc001bec2c0) (0xc0025f00a0) Create stream I0626 00:36:38.556028 8 log.go:172] (0xc001bec2c0) (0xc0025f00a0) Stream added, broadcasting: 5 I0626 00:36:38.556828 8 log.go:172] (0xc001bec2c0) Reply frame received for 5 I0626 00:36:38.622272 8 log.go:172] (0xc001bec2c0) Data frame received for 3 I0626 00:36:38.622312 8 log.go:172] (0xc0006edd60) (3) Data frame handling I0626 00:36:38.622358 8 log.go:172] (0xc0006edd60) (3) Data frame sent I0626 00:36:38.622395 8 log.go:172] (0xc001bec2c0) Data frame received for 3 I0626 00:36:38.622428 8 log.go:172] (0xc0006edd60) (3) Data frame handling I0626 00:36:38.622469 8 log.go:172] (0xc001bec2c0) Data frame received for 5 I0626 00:36:38.622498 8 log.go:172] (0xc0025f00a0) (5) Data frame handling I0626 00:36:38.624334 8 log.go:172] (0xc001bec2c0) Data frame received for 1 I0626 00:36:38.624366 8 log.go:172] (0xc0006edc20) (1) Data frame handling I0626 00:36:38.624409 8 log.go:172] (0xc0006edc20) (1) Data frame sent I0626 00:36:38.624438 8 log.go:172] (0xc001bec2c0) (0xc0006edc20) Stream removed, broadcasting: 1 I0626 00:36:38.624466 8 log.go:172] (0xc001bec2c0) Go away received I0626 00:36:38.624570 8 log.go:172] (0xc001bec2c0) (0xc0006edc20) Stream removed, broadcasting: 1 I0626 00:36:38.624594 8 log.go:172] (0xc001bec2c0) (0xc0006edd60) Stream removed, broadcasting: 3 I0626 00:36:38.624605 8 log.go:172] (0xc001bec2c0) (0xc0025f00a0) Stream removed, broadcasting: 5 Jun 26 00:36:38.624: INFO: Exec 
stderr: "" Jun 26 00:36:38.624: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3444 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:36:38.624: INFO: >>> kubeConfig: /root/.kube/config I0626 00:36:38.663047 8 log.go:172] (0xc004e33290) (0xc002be95e0) Create stream I0626 00:36:38.663083 8 log.go:172] (0xc004e33290) (0xc002be95e0) Stream added, broadcasting: 1 I0626 00:36:38.672082 8 log.go:172] (0xc004e33290) Reply frame received for 1 I0626 00:36:38.672140 8 log.go:172] (0xc004e33290) (0xc002be9680) Create stream I0626 00:36:38.672167 8 log.go:172] (0xc004e33290) (0xc002be9680) Stream added, broadcasting: 3 I0626 00:36:38.673694 8 log.go:172] (0xc004e33290) Reply frame received for 3 I0626 00:36:38.673754 8 log.go:172] (0xc004e33290) (0xc001d18000) Create stream I0626 00:36:38.673782 8 log.go:172] (0xc004e33290) (0xc001d18000) Stream added, broadcasting: 5 I0626 00:36:38.674674 8 log.go:172] (0xc004e33290) Reply frame received for 5 I0626 00:36:38.724548 8 log.go:172] (0xc004e33290) Data frame received for 3 I0626 00:36:38.724571 8 log.go:172] (0xc002be9680) (3) Data frame handling I0626 00:36:38.724587 8 log.go:172] (0xc002be9680) (3) Data frame sent I0626 00:36:38.724748 8 log.go:172] (0xc004e33290) Data frame received for 3 I0626 00:36:38.724768 8 log.go:172] (0xc002be9680) (3) Data frame handling I0626 00:36:38.724787 8 log.go:172] (0xc004e33290) Data frame received for 5 I0626 00:36:38.724797 8 log.go:172] (0xc001d18000) (5) Data frame handling I0626 00:36:38.726811 8 log.go:172] (0xc004e33290) Data frame received for 1 I0626 00:36:38.726823 8 log.go:172] (0xc002be95e0) (1) Data frame handling I0626 00:36:38.726829 8 log.go:172] (0xc002be95e0) (1) Data frame sent I0626 00:36:38.726944 8 log.go:172] (0xc004e33290) (0xc002be95e0) Stream removed, broadcasting: 1 I0626 00:36:38.727051 8 log.go:172] (0xc004e33290) (0xc002be95e0) Stream removed, broadcasting: 1 I0626 00:36:38.727074 8 log.go:172] (0xc004e33290) (0xc002be9680) Stream removed, broadcasting: 3 I0626 00:36:38.727123 8 log.go:172] (0xc004e33290) Go away received I0626 00:36:38.727178 8 log.go:172] (0xc004e33290) (0xc001d18000) Stream removed, broadcasting: 5 Jun 26 00:36:38.727: INFO: Exec stderr: "" Jun 26 00:36:38.727: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3444 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:36:38.727: INFO: >>> kubeConfig: /root/.kube/config I0626 00:36:38.757421 8 log.go:172] (0xc002dbfb80) (0xc002e76320) Create stream I0626 00:36:38.757452 8 log.go:172] (0xc002dbfb80) (0xc002e76320) Stream added, broadcasting: 1 I0626 00:36:38.759050 8 log.go:172] (0xc002dbfb80) Reply frame received for 1 I0626 00:36:38.759101 8 log.go:172] (0xc002dbfb80) (0xc0020e00a0) Create stream I0626 00:36:38.759114 8 log.go:172] (0xc002dbfb80) (0xc0020e00a0) Stream added, broadcasting: 3 I0626 00:36:38.760223 8 log.go:172] (0xc002dbfb80) Reply frame received for 3 I0626 00:36:38.760254 8 log.go:172] (0xc002dbfb80) (0xc0020e0140) Create stream I0626 00:36:38.760260 8 log.go:172] (0xc002dbfb80) (0xc0020e0140) Stream added, broadcasting: 5 I0626 00:36:38.761408 8 log.go:172] (0xc002dbfb80) Reply frame received for 5 I0626 00:36:38.810665 8 log.go:172] (0xc002dbfb80) Data frame received for 3 I0626 00:36:38.810691 8 log.go:172] (0xc0020e00a0) (3) Data frame handling I0626 
00:36:38.810704 8 log.go:172] (0xc0020e00a0) (3) Data frame sent I0626 00:36:38.810716 8 log.go:172] (0xc002dbfb80) Data frame received for 3 I0626 00:36:38.810732 8 log.go:172] (0xc0020e00a0) (3) Data frame handling I0626 00:36:38.810828 8 log.go:172] (0xc002dbfb80) Data frame received for 5 I0626 00:36:38.810873 8 log.go:172] (0xc0020e0140) (5) Data frame handling I0626 00:36:38.812157 8 log.go:172] (0xc002dbfb80) Data frame received for 1 I0626 00:36:38.812191 8 log.go:172] (0xc002e76320) (1) Data frame handling I0626 00:36:38.812206 8 log.go:172] (0xc002e76320) (1) Data frame sent I0626 00:36:38.812229 8 log.go:172] (0xc002dbfb80) (0xc002e76320) Stream removed, broadcasting: 1 I0626 00:36:38.812295 8 log.go:172] (0xc002dbfb80) Go away received I0626 00:36:38.812319 8 log.go:172] (0xc002dbfb80) (0xc002e76320) Stream removed, broadcasting: 1 I0626 00:36:38.812338 8 log.go:172] (0xc002dbfb80) (0xc0020e00a0) Stream removed, broadcasting: 3 I0626 00:36:38.812344 8 log.go:172] (0xc002dbfb80) (0xc0020e0140) Stream removed, broadcasting: 5 Jun 26 00:36:38.812: INFO: Exec stderr: "" Jun 26 00:36:38.812: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3444 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:36:38.812: INFO: >>> kubeConfig: /root/.kube/config I0626 00:36:38.842964 8 log.go:172] (0xc00116e4d0) (0xc0025ba280) Create stream I0626 00:36:38.842991 8 log.go:172] (0xc00116e4d0) (0xc0025ba280) Stream added, broadcasting: 1 I0626 00:36:38.844524 8 log.go:172] (0xc00116e4d0) Reply frame received for 1 I0626 00:36:38.844568 8 log.go:172] (0xc00116e4d0) (0xc0025ba5a0) Create stream I0626 00:36:38.844580 8 log.go:172] (0xc00116e4d0) (0xc0025ba5a0) Stream added, broadcasting: 3 I0626 00:36:38.845602 8 log.go:172] (0xc00116e4d0) Reply frame received for 3 I0626 00:36:38.845707 8 log.go:172] (0xc00116e4d0) (0xc0025ba6e0) Create stream I0626 00:36:38.845720 8 log.go:172] (0xc00116e4d0) (0xc0025ba6e0) Stream added, broadcasting: 5 I0626 00:36:38.846489 8 log.go:172] (0xc00116e4d0) Reply frame received for 5 I0626 00:36:38.899456 8 log.go:172] (0xc00116e4d0) Data frame received for 5 I0626 00:36:38.899483 8 log.go:172] (0xc0025ba6e0) (5) Data frame handling I0626 00:36:38.899543 8 log.go:172] (0xc00116e4d0) Data frame received for 3 I0626 00:36:38.899600 8 log.go:172] (0xc0025ba5a0) (3) Data frame handling I0626 00:36:38.899629 8 log.go:172] (0xc0025ba5a0) (3) Data frame sent I0626 00:36:38.899652 8 log.go:172] (0xc00116e4d0) Data frame received for 3 I0626 00:36:38.899668 8 log.go:172] (0xc0025ba5a0) (3) Data frame handling I0626 00:36:38.900824 8 log.go:172] (0xc00116e4d0) Data frame received for 1 I0626 00:36:38.900847 8 log.go:172] (0xc0025ba280) (1) Data frame handling I0626 00:36:38.900865 8 log.go:172] (0xc0025ba280) (1) Data frame sent I0626 00:36:38.900881 8 log.go:172] (0xc00116e4d0) (0xc0025ba280) Stream removed, broadcasting: 1 I0626 00:36:38.900898 8 log.go:172] (0xc00116e4d0) Go away received I0626 00:36:38.901059 8 log.go:172] (0xc00116e4d0) (0xc0025ba280) Stream removed, broadcasting: 1 I0626 00:36:38.901088 8 log.go:172] (0xc00116e4d0) (0xc0025ba5a0) Stream removed, broadcasting: 3 I0626 00:36:38.901271 8 log.go:172] (0xc00116e4d0) (0xc0025ba6e0) Stream removed, broadcasting: 5 Jun 26 00:36:38.901: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 
Jun 26 00:36:38.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3444" for this suite. • [SLOW TEST:11.277 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":174,"skipped":2647,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:36:38.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-ce882050-3042-4a8b-839d-2b4781fdd1dc STEP: Creating a pod to test consume secrets Jun 26 00:36:39.023: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0e033152-5e37-4f27-888c-ecb952634730" in namespace "projected-4393" to be "Succeeded or Failed" Jun 26 00:36:39.043: INFO: Pod "pod-projected-secrets-0e033152-5e37-4f27-888c-ecb952634730": Phase="Pending", Reason="", readiness=false. Elapsed: 19.937469ms Jun 26 00:36:41.048: INFO: Pod "pod-projected-secrets-0e033152-5e37-4f27-888c-ecb952634730": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025181089s Jun 26 00:36:43.052: INFO: Pod "pod-projected-secrets-0e033152-5e37-4f27-888c-ecb952634730": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029337365s STEP: Saw pod success Jun 26 00:36:43.053: INFO: Pod "pod-projected-secrets-0e033152-5e37-4f27-888c-ecb952634730" satisfied condition "Succeeded or Failed" Jun 26 00:36:43.055: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-0e033152-5e37-4f27-888c-ecb952634730 container projected-secret-volume-test: STEP: delete the pod Jun 26 00:36:43.402: INFO: Waiting for pod pod-projected-secrets-0e033152-5e37-4f27-888c-ecb952634730 to disappear Jun 26 00:36:43.406: INFO: Pod pod-projected-secrets-0e033152-5e37-4f27-888c-ecb952634730 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:36:43.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4393" for this suite. 
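The "with mappings" variant above remaps a secret key to a new file name via KeyToPath inside a projected volume, so the container reads the data under the mapped path rather than the key name. A minimal sketch of that volume (corev1 types; secret, key, and path names are illustrative assumptions):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedSecretVolume projects one secret key to a remapped file name;
// mounted at /etc/projected, the data appears at
// /etc/projected/new-path-data-1 instead of /etc/projected/data-1.
func projectedSecretVolume(secretName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
						}},
					},
				}},
			},
		},
	}
}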
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":175,"skipped":2696,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:36:43.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:36:43.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7057" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":294,"completed":176,"skipped":2698,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:36:43.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:36:47.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4999" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":294,"completed":177,"skipped":2705,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:36:47.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 26 00:36:52.432: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:36:52.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7659" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":294,"completed":178,"skipped":2739,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:36:52.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-6fc962c1-14b7-4109-ab70-90d39473dad4 STEP: Creating a pod to test consume configMaps Jun 26 00:36:52.610: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-09ecc584-7612-465d-a24f-757f72b0d01c" in namespace "projected-7455" to be "Succeeded or Failed" Jun 26 00:36:52.629: INFO: Pod "pod-projected-configmaps-09ecc584-7612-465d-a24f-757f72b0d01c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.238167ms Jun 26 00:36:54.695: INFO: Pod "pod-projected-configmaps-09ecc584-7612-465d-a24f-757f72b0d01c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084633845s Jun 26 00:36:56.700: INFO: Pod "pod-projected-configmaps-09ecc584-7612-465d-a24f-757f72b0d01c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089546422s STEP: Saw pod success Jun 26 00:36:56.700: INFO: Pod "pod-projected-configmaps-09ecc584-7612-465d-a24f-757f72b0d01c" satisfied condition "Succeeded or Failed" Jun 26 00:36:56.703: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-09ecc584-7612-465d-a24f-757f72b0d01c container projected-configmap-volume-test: STEP: delete the pod Jun 26 00:36:56.743: INFO: Waiting for pod pod-projected-configmaps-09ecc584-7612-465d-a24f-757f72b0d01c to disappear Jun 26 00:36:56.755: INFO: Pod pod-projected-configmaps-09ecc584-7612-465d-a24f-757f72b0d01c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:36:56.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7455" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":179,"skipped":2807,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:36:56.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:36:56.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71cca6a3-3c66-4037-ad93-3504e2ef5786" in namespace "projected-1647" to be "Succeeded or Failed" Jun 26 00:36:56.862: INFO: Pod "downwardapi-volume-71cca6a3-3c66-4037-ad93-3504e2ef5786": Phase="Pending", Reason="", readiness=false. Elapsed: 23.082725ms Jun 26 00:36:58.867: INFO: Pod "downwardapi-volume-71cca6a3-3c66-4037-ad93-3504e2ef5786": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027811398s Jun 26 00:37:00.871: INFO: Pod "downwardapi-volume-71cca6a3-3c66-4037-ad93-3504e2ef5786": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032210547s STEP: Saw pod success Jun 26 00:37:00.871: INFO: Pod "downwardapi-volume-71cca6a3-3c66-4037-ad93-3504e2ef5786" satisfied condition "Succeeded or Failed" Jun 26 00:37:00.911: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-71cca6a3-3c66-4037-ad93-3504e2ef5786 container client-container: STEP: delete the pod Jun 26 00:37:00.954: INFO: Waiting for pod downwardapi-volume-71cca6a3-3c66-4037-ad93-3504e2ef5786 to disappear Jun 26 00:37:00.968: INFO: Pod downwardapi-volume-71cca6a3-3c66-4037-ad93-3504e2ef5786 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:37:00.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1647" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":294,"completed":180,"skipped":2822,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:37:00.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:37:01.437: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jun 26 00:37:03.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728621, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728621, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728621, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728621, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:37:06.552: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:37:07.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-24" for this suite. STEP: Destroying namespace "webhook-24-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.907 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":294,"completed":181,"skipped":2838,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:37:07.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3011.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3011.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 00:37:14.036: INFO: DNS probes using dns-test-ef9fb03f-bf70-46a4-b768-41059aa1192f succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3011.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3011.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 00:37:22.160: INFO: File wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local from pod dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 00:37:22.163: INFO: File jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local from pod dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 26 00:37:22.163: INFO: Lookups using dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 failed for: [wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local] Jun 26 00:37:27.168: INFO: File wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local from pod dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 00:37:27.172: INFO: File jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local from pod dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 00:37:27.172: INFO: Lookups using dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 failed for: [wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local] Jun 26 00:37:32.168: INFO: File wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local from pod dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 00:37:32.171: INFO: File jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local from pod dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 00:37:32.171: INFO: Lookups using dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 failed for: [wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local] Jun 26 00:37:37.182: INFO: File wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local from pod dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 00:37:37.186: INFO: File jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local from pod dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 00:37:37.186: INFO: Lookups using dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 failed for: [wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local] Jun 26 00:37:42.169: INFO: File wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local from pod dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 26 00:37:42.173: INFO: File jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local from pod dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 26 00:37:42.173: INFO: Lookups using dns-3011/dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 failed for: [wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local] Jun 26 00:37:47.174: INFO: DNS probes using dns-test-c704f5c7-1b22-46bb-84fc-5c493c6c9006 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3011.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3011.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3011.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3011.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 00:37:55.782: INFO: DNS probes using dns-test-ad41f6ec-4a9a-4b9c-a02b-0ecab2a401a1 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:37:55.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3011" for this suite. • [SLOW TEST:48.016 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":294,"completed":182,"skipped":2841,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:37:55.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
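Annotation: the full server-side object dump of that pod follows. For orientation, a minimal client-go sketch that would produce an equivalent spec; the image, args, the 1.1.1.1 nameserver and the resolv.conf.local search domain are taken from the dump below, while the pod name, namespace and error handling are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-sketch", Namespace: "default"},
		Spec: corev1.PodSpec{
			// DNSPolicy None makes the kubelet ignore cluster DNS entirely and
			// render the container's /etc/resolv.conf from DNSConfig alone.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				Args:  []string{"pause"},
			}},
		},
	}
	created, err := cs.CoreV1().Pods(pod.Namespace).Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}

The verification half of the spec then execs into the running pod, which is what the stream logs further down show.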
Jun 26 00:37:56.035: INFO: Created pod &Pod{ObjectMeta:{dns-1888 dns-1888 /api/v1/namespaces/dns-1888/pods/dns-1888 01944b78-2554-4d44-a005-91229bd84870 15923253 0 2020-06-26 00:37:56 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-06-26 00:37:56 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jl282,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jl282,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jl282,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]
ContainerStatus{},},} Jun 26 00:37:56.396: INFO: The status of Pod dns-1888 is Pending, waiting for it to be Running (with Ready = true) Jun 26 00:37:58.400: INFO: The status of Pod dns-1888 is Pending, waiting for it to be Running (with Ready = true) Jun 26 00:38:00.401: INFO: The status of Pod dns-1888 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Jun 26 00:38:00.401: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1888 PodName:dns-1888 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:38:00.401: INFO: >>> kubeConfig: /root/.kube/config I0626 00:38:00.437763 8 log.go:172] (0xc004e32f20) (0xc0025f1860) Create stream I0626 00:38:00.437798 8 log.go:172] (0xc004e32f20) (0xc0025f1860) Stream added, broadcasting: 1 I0626 00:38:00.439724 8 log.go:172] (0xc004e32f20) Reply frame received for 1 I0626 00:38:00.439759 8 log.go:172] (0xc004e32f20) (0xc002e765a0) Create stream I0626 00:38:00.439768 8 log.go:172] (0xc004e32f20) (0xc002e765a0) Stream added, broadcasting: 3 I0626 00:38:00.440699 8 log.go:172] (0xc004e32f20) Reply frame received for 3 I0626 00:38:00.440740 8 log.go:172] (0xc004e32f20) (0xc002e766e0) Create stream I0626 00:38:00.440754 8 log.go:172] (0xc004e32f20) (0xc002e766e0) Stream added, broadcasting: 5 I0626 00:38:00.442130 8 log.go:172] (0xc004e32f20) Reply frame received for 5 I0626 00:38:00.536934 8 log.go:172] (0xc004e32f20) Data frame received for 3 I0626 00:38:00.536988 8 log.go:172] (0xc002e765a0) (3) Data frame handling I0626 00:38:00.537014 8 log.go:172] (0xc002e765a0) (3) Data frame sent I0626 00:38:00.538724 8 log.go:172] (0xc004e32f20) Data frame received for 5 I0626 00:38:00.538762 8 log.go:172] (0xc002e766e0) (5) Data frame handling I0626 00:38:00.538805 8 log.go:172] (0xc004e32f20) Data frame received for 3 I0626 00:38:00.538822 8 log.go:172] (0xc002e765a0) (3) Data frame handling I0626 00:38:00.539993 8 log.go:172] (0xc004e32f20) Data frame received for 1 I0626 00:38:00.540033 8 log.go:172] (0xc0025f1860) (1) Data frame handling I0626 00:38:00.540075 8 log.go:172] (0xc0025f1860) (1) Data frame sent I0626 00:38:00.540098 8 log.go:172] (0xc004e32f20) (0xc0025f1860) Stream removed, broadcasting: 1 I0626 00:38:00.540117 8 log.go:172] (0xc004e32f20) Go away received I0626 00:38:00.540249 8 log.go:172] (0xc004e32f20) (0xc0025f1860) Stream removed, broadcasting: 1 I0626 00:38:00.540279 8 log.go:172] (0xc004e32f20) (0xc002e765a0) Stream removed, broadcasting: 3 I0626 00:38:00.540299 8 log.go:172] (0xc004e32f20) (0xc002e766e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Jun 26 00:38:00.540: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1888 PodName:dns-1888 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:38:00.540: INFO: >>> kubeConfig: /root/.kube/config I0626 00:38:00.605497 8 log.go:172] (0xc001bec630) (0xc0027b1360) Create stream I0626 00:38:00.605537 8 log.go:172] (0xc001bec630) (0xc0027b1360) Stream added, broadcasting: 1 I0626 00:38:00.607570 8 log.go:172] (0xc001bec630) Reply frame received for 1 I0626 00:38:00.607611 8 log.go:172] (0xc001bec630) (0xc0027b15e0) Create stream I0626 00:38:00.607630 8 log.go:172] (0xc001bec630) (0xc0027b15e0) Stream added, broadcasting: 3 I0626 00:38:00.608504 8 log.go:172] (0xc001bec630) Reply frame received for 3 I0626 00:38:00.608544 8 log.go:172] (0xc001bec630) (0xc002be9360) Create stream I0626 00:38:00.608561 8 log.go:172] (0xc001bec630) (0xc002be9360) Stream added, broadcasting: 5 I0626 00:38:00.609937 8 log.go:172] (0xc001bec630) Reply frame received for 5 I0626 00:38:00.699060 8 log.go:172] (0xc001bec630) Data frame received for 3 I0626 00:38:00.699098 8 log.go:172] (0xc0027b15e0) (3) Data frame handling I0626 00:38:00.699124 8 log.go:172] (0xc0027b15e0) (3) Data frame sent I0626 00:38:00.700577 8 log.go:172] (0xc001bec630) Data frame received for 5 I0626 00:38:00.700603 8 log.go:172] (0xc002be9360) (5) Data frame handling I0626 00:38:00.700627 8 log.go:172] (0xc001bec630) Data frame received for 3 I0626 00:38:00.700641 8 log.go:172] (0xc0027b15e0) (3) Data frame handling I0626 00:38:00.702726 8 log.go:172] (0xc001bec630) Data frame received for 1 I0626 00:38:00.702755 8 log.go:172] (0xc0027b1360) (1) Data frame handling I0626 00:38:00.702959 8 log.go:172] (0xc0027b1360) (1) Data frame sent I0626 00:38:00.702982 8 log.go:172] (0xc001bec630) (0xc0027b1360) Stream removed, broadcasting: 1 I0626 00:38:00.703002 8 log.go:172] (0xc001bec630) Go away received I0626 00:38:00.703146 8 log.go:172] (0xc001bec630) (0xc0027b1360) Stream removed, broadcasting: 1 I0626 00:38:00.703175 8 log.go:172] (0xc001bec630) (0xc0027b15e0) Stream removed, broadcasting: 3 I0626 00:38:00.703202 8 log.go:172] (0xc001bec630) (0xc002be9360) Stream removed, broadcasting: 5 Jun 26 00:38:00.703: INFO: Deleting pod dns-1888... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:38:00.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1888" for this suite. 
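Annotation: the ExecWithOptions entries and the "Create stream" / "Data frame" lines above are client-go's SPDY exec plumbing carrying the command's stdout and stderr back to the suite. A rough sketch of the same kind of verification, reading a pod's resolv.conf over an exec session; readResolvConf is a hypothetical helper, and the agnhost container name mirrors the log:

package sketch

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// readResolvConf opens an exec session into the pod and captures the
// command's stdout, roughly what the framework's ExecWithOptions does.
// cfg is the same rest.Config used to build the clientset cs.
func readResolvConf(cfg *restclient.Config, cs kubernetes.Interface, ns, pod string) (string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "agnhost",
			Command:   []string{"cat", "/etc/resolv.conf"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		return "", err
	}
	return stdout.String(), nil
}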
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":294,"completed":183,"skipped":2844,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:38:00.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Jun 26 00:38:00.927: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:38:17.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2710" for this suite. 
• [SLOW TEST:16.874 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":294,"completed":184,"skipped":2844,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:38:17.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:38:17.712: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 26 00:38:19.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1708 create -f -' Jun 26 00:38:22.743: INFO: stderr: "" Jun 26 00:38:22.743: INFO: stdout: "e2e-test-crd-publish-openapi-3902-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 26 00:38:22.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1708 delete e2e-test-crd-publish-openapi-3902-crds test-cr' Jun 26 00:38:22.869: INFO: stderr: "" Jun 26 00:38:22.869: INFO: stdout: "e2e-test-crd-publish-openapi-3902-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jun 26 00:38:22.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1708 apply -f -' Jun 26 00:38:23.177: INFO: stderr: "" Jun 26 00:38:23.177: INFO: stdout: "e2e-test-crd-publish-openapi-3902-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 26 00:38:23.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1708 delete e2e-test-crd-publish-openapi-3902-crds test-cr' Jun 26 00:38:23.287: INFO: stderr: "" Jun 26 00:38:23.287: INFO: stdout: "e2e-test-crd-publish-openapi-3902-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 26 00:38:23.287: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3902-crds' Jun 26 00:38:23.554: INFO: stderr: "" Jun 26 00:38:23.554: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3902-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:38:26.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1708" for this suite. • [SLOW TEST:8.833 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":294,"completed":185,"skipped":2870,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:38:26.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7493 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 26 00:38:26.499: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 26 00:38:26.597: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 26 00:38:28.601: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 26 00:38:30.602: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:38:32.602: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:38:34.602: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:38:36.602: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:38:38.602: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:38:40.602: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 26 00:38:40.606: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 26 00:38:42.610: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 26 00:38:44.611: INFO: The status of Pod netserver-1 
is Running (Ready = true) STEP: Creating test pods Jun 26 00:38:48.694: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.189 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7493 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:38:48.694: INFO: >>> kubeConfig: /root/.kube/config I0626 00:38:48.717526 8 log.go:172] (0xc0063a02c0) (0xc0021d2aa0) Create stream I0626 00:38:48.717555 8 log.go:172] (0xc0063a02c0) (0xc0021d2aa0) Stream added, broadcasting: 1 I0626 00:38:48.719380 8 log.go:172] (0xc0063a02c0) Reply frame received for 1 I0626 00:38:48.719411 8 log.go:172] (0xc0063a02c0) (0xc0027b1720) Create stream I0626 00:38:48.719422 8 log.go:172] (0xc0063a02c0) (0xc0027b1720) Stream added, broadcasting: 3 I0626 00:38:48.720225 8 log.go:172] (0xc0063a02c0) Reply frame received for 3 I0626 00:38:48.720257 8 log.go:172] (0xc0063a02c0) (0xc001d18b40) Create stream I0626 00:38:48.720269 8 log.go:172] (0xc0063a02c0) (0xc001d18b40) Stream added, broadcasting: 5 I0626 00:38:48.721532 8 log.go:172] (0xc0063a02c0) Reply frame received for 5 I0626 00:38:49.805867 8 log.go:172] (0xc0063a02c0) Data frame received for 3 I0626 00:38:49.805921 8 log.go:172] (0xc0027b1720) (3) Data frame handling I0626 00:38:49.805963 8 log.go:172] (0xc0027b1720) (3) Data frame sent I0626 00:38:49.806102 8 log.go:172] (0xc0063a02c0) Data frame received for 3 I0626 00:38:49.806135 8 log.go:172] (0xc0027b1720) (3) Data frame handling I0626 00:38:49.806248 8 log.go:172] (0xc0063a02c0) Data frame received for 5 I0626 00:38:49.806305 8 log.go:172] (0xc001d18b40) (5) Data frame handling I0626 00:38:49.808798 8 log.go:172] (0xc0063a02c0) Data frame received for 1 I0626 00:38:49.808842 8 log.go:172] (0xc0021d2aa0) (1) Data frame handling I0626 00:38:49.808902 8 log.go:172] (0xc0021d2aa0) (1) Data frame sent I0626 00:38:49.808931 8 log.go:172] (0xc0063a02c0) (0xc0021d2aa0) Stream removed, broadcasting: 1 I0626 00:38:49.808981 8 log.go:172] (0xc0063a02c0) Go away received I0626 00:38:49.809030 8 log.go:172] (0xc0063a02c0) (0xc0021d2aa0) Stream removed, broadcasting: 1 I0626 00:38:49.809055 8 log.go:172] (0xc0063a02c0) (0xc0027b1720) Stream removed, broadcasting: 3 I0626 00:38:49.809074 8 log.go:172] (0xc0063a02c0) (0xc001d18b40) Stream removed, broadcasting: 5 Jun 26 00:38:49.809: INFO: Found all expected endpoints: [netserver-0] Jun 26 00:38:49.812: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.9 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7493 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:38:49.812: INFO: >>> kubeConfig: /root/.kube/config I0626 00:38:49.846818 8 log.go:172] (0xc0064802c0) (0xc002dd95e0) Create stream I0626 00:38:49.846845 8 log.go:172] (0xc0064802c0) (0xc002dd95e0) Stream added, broadcasting: 1 I0626 00:38:49.848527 8 log.go:172] (0xc0064802c0) Reply frame received for 1 I0626 00:38:49.848567 8 log.go:172] (0xc0064802c0) (0xc001d18f00) Create stream I0626 00:38:49.848583 8 log.go:172] (0xc0064802c0) (0xc001d18f00) Stream added, broadcasting: 3 I0626 00:38:49.849958 8 log.go:172] (0xc0064802c0) Reply frame received for 3 I0626 00:38:49.850001 8 log.go:172] (0xc0064802c0) (0xc002330000) Create stream I0626 00:38:49.850016 8 log.go:172] (0xc0064802c0) (0xc002330000) Stream added, broadcasting: 5 I0626 00:38:49.851187 8 log.go:172] (0xc0064802c0) Reply frame received for 5 I0626 
00:38:50.942746 8 log.go:172] (0xc0064802c0) Data frame received for 3 I0626 00:38:50.942897 8 log.go:172] (0xc001d18f00) (3) Data frame handling I0626 00:38:50.943027 8 log.go:172] (0xc001d18f00) (3) Data frame sent I0626 00:38:50.943056 8 log.go:172] (0xc0064802c0) Data frame received for 5 I0626 00:38:50.943087 8 log.go:172] (0xc002330000) (5) Data frame handling I0626 00:38:50.943402 8 log.go:172] (0xc0064802c0) Data frame received for 3 I0626 00:38:50.943424 8 log.go:172] (0xc001d18f00) (3) Data frame handling I0626 00:38:50.945378 8 log.go:172] (0xc0064802c0) Data frame received for 1 I0626 00:38:50.945414 8 log.go:172] (0xc002dd95e0) (1) Data frame handling I0626 00:38:50.945445 8 log.go:172] (0xc002dd95e0) (1) Data frame sent I0626 00:38:50.945506 8 log.go:172] (0xc0064802c0) (0xc002dd95e0) Stream removed, broadcasting: 1 I0626 00:38:50.945535 8 log.go:172] (0xc0064802c0) Go away received I0626 00:38:50.945595 8 log.go:172] (0xc0064802c0) (0xc002dd95e0) Stream removed, broadcasting: 1 I0626 00:38:50.945624 8 log.go:172] (0xc0064802c0) (0xc001d18f00) Stream removed, broadcasting: 3 I0626 00:38:50.945638 8 log.go:172] (0xc0064802c0) (0xc002330000) Stream removed, broadcasting: 5 Jun 26 00:38:50.945: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:38:50.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7493" for this suite. • [SLOW TEST:24.519 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":186,"skipped":2875,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:38:50.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 26 00:38:51.027: INFO: 
PodSpec: initContainers in spec.initContainers Jun 26 00:39:39.525: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-596186f1-dceb-4b9f-b9b4-003079c1fef7", GenerateName:"", Namespace:"init-container-6262", SelfLink:"/api/v1/namespaces/init-container-6262/pods/pod-init-596186f1-dceb-4b9f-b9b4-003079c1fef7", UID:"419a0b74-f639-4ca6-8708-0152a69ab0e8", ResourceVersion:"15923761", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63728728731, loc:(*time.Location)(0x80643c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"27329539"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001d28ce0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d28d00)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001d28d20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d28d40)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4ssvr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005c8b380), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4ssvr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", 
Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4ssvr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4ssvr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004e68258), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c9fa40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004e68310)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004e68330)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004e68338), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004e6833c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728731, loc:(*time.Location)(0x80643c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728731, loc:(*time.Location)(0x80643c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728731, loc:(*time.Location)(0x80643c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728731, loc:(*time.Location)(0x80643c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.2.10", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.10"}}, StartTime:(*v1.Time)(0xc001d28d60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c9fb90)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c9fc00)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://50d31be3c52aeb9a284e6aabc68d16b576e40fee442e397e08e59b50c6b76667", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d28da0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d28d80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc004e683ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:39:39.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6262" for this suite. 
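Annotation: the dump above is dominated by status: init1 keeps terminating and restarting with backoff (RestartCount:3), init2 never leaves Waiting, and run1 is never started, which is exactly the behavior under test. The pod shape itself is small; a sketch mirroring the spec in the dump (names and the helper are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod reproduces the dumped pod: init1 always fails, so init2
// never runs and the app container run1 stays Waiting while the kubelet
// restarts init1 under RestartPolicy Always.
func failingInitPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-sketch", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("100m")},
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("100m")},
				},
			}},
		},
	}
}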
• [SLOW TEST:48.595 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":294,"completed":187,"skipped":2891,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:39:39.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:39:52.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8081" for this suite. • [SLOW TEST:13.239 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":294,"completed":188,"skipped":2891,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:39:52.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:39:53.797: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 00:39:55.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728793, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728793, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728793, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728793, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:39:58.883: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:39:58.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:40:00.082: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9053" for this suite. STEP: Destroying namespace "webhook-9053-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.425 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":294,"completed":189,"skipped":2893,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:40:00.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:40:00.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-231" for this suite. 
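Annotation: the event lifecycle just exercised maps one-to-one onto core v1 client verbs. A sketch of the same sequence; eventLifecycle, the event name, and the patch body are illustrative:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// eventLifecycle runs an Event through create, list across all namespaces,
// patch, get, and delete, the verbs the spec above exercises.
func eventLifecycle(ctx context.Context, cs kubernetes.Interface, ns string) error {
	ev := &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{Name: "sketch-event"},
		InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "some-pod"},
		Reason:         "Sketch",
		Message:        "created for illustration",
		Type:           corev1.EventTypeNormal,
	}
	if _, err := cs.CoreV1().Events(ns).Create(ctx, ev, metav1.CreateOptions{}); err != nil {
		return err
	}
	if _, err := cs.CoreV1().Events(metav1.NamespaceAll).List(ctx, metav1.ListOptions{}); err != nil {
		return err
	}
	patch := []byte(`{"message":"patched for illustration"}`)
	if _, err := cs.CoreV1().Events(ns).Patch(ctx, "sketch-event", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	if _, err := cs.CoreV1().Events(ns).Get(ctx, "sketch-event", metav1.GetOptions{}); err != nil {
		return err
	}
	return cs.CoreV1().Events(ns).Delete(ctx, "sketch-event", metav1.DeleteOptions{})
}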
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":294,"completed":190,"skipped":2922,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:40:00.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 26 00:40:00.508: INFO: Waiting up to 5m0s for pod "pod-66a5e537-a447-40d6-8ced-9c796ef16a28" in namespace "emptydir-3268" to be "Succeeded or Failed" Jun 26 00:40:00.512: INFO: Pod "pod-66a5e537-a447-40d6-8ced-9c796ef16a28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121636ms Jun 26 00:40:02.516: INFO: Pod "pod-66a5e537-a447-40d6-8ced-9c796ef16a28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008071445s Jun 26 00:40:04.521: INFO: Pod "pod-66a5e537-a447-40d6-8ced-9c796ef16a28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013000194s STEP: Saw pod success Jun 26 00:40:04.521: INFO: Pod "pod-66a5e537-a447-40d6-8ced-9c796ef16a28" satisfied condition "Succeeded or Failed" Jun 26 00:40:04.524: INFO: Trying to get logs from node latest-worker pod pod-66a5e537-a447-40d6-8ced-9c796ef16a28 container test-container: STEP: delete the pod Jun 26 00:40:04.575: INFO: Waiting for pod pod-66a5e537-a447-40d6-8ced-9c796ef16a28 to disappear Jun 26 00:40:04.588: INFO: Pod pod-66a5e537-a447-40d6-8ced-9c796ef16a28 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:40:04.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3268" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":191,"skipped":2956,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:40:04.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 26 00:40:04.677: INFO: Waiting up to 5m0s for pod "pod-8b21d6cc-fe6f-411b-be1b-11d4521b0c4f" in namespace "emptydir-12" to be "Succeeded or Failed" Jun 26 00:40:04.693: INFO: Pod "pod-8b21d6cc-fe6f-411b-be1b-11d4521b0c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.053974ms Jun 26 00:40:06.698: INFO: Pod "pod-8b21d6cc-fe6f-411b-be1b-11d4521b0c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020635425s Jun 26 00:40:08.704: INFO: Pod "pod-8b21d6cc-fe6f-411b-be1b-11d4521b0c4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026721434s STEP: Saw pod success Jun 26 00:40:08.704: INFO: Pod "pod-8b21d6cc-fe6f-411b-be1b-11d4521b0c4f" satisfied condition "Succeeded or Failed" Jun 26 00:40:08.707: INFO: Trying to get logs from node latest-worker pod pod-8b21d6cc-fe6f-411b-be1b-11d4521b0c4f container test-container: STEP: delete the pod Jun 26 00:40:08.859: INFO: Waiting for pod pod-8b21d6cc-fe6f-411b-be1b-11d4521b0c4f to disappear Jun 26 00:40:08.990: INFO: Pod pod-8b21d6cc-fe6f-411b-be1b-11d4521b0c4f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:40:08.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-12" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":192,"skipped":2973,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:40:09.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-9f8edae7-41bb-4117-9ca5-361ffc93f913 STEP: Creating secret with name s-test-opt-upd-94657eb5-510f-47a1-9622-5f9fcef2c8a3 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-9f8edae7-41bb-4117-9ca5-361ffc93f913 STEP: Updating secret s-test-opt-upd-94657eb5-510f-47a1-9622-5f9fcef2c8a3 STEP: Creating secret with name s-test-opt-create-c0ba1d67-4199-4f8c-b474-ac3a10f3b908 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:41:37.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4937" for this suite. • [SLOW TEST:88.848 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":193,"skipped":2976,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:41:37.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:41:55.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9275" for this suite. • [SLOW TEST:17.196 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":294,"completed":194,"skipped":2982,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:41:55.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:41:55.252: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"391681a9-14dc-4d3b-a6e4-1b11e264b4fc", Controller:(*bool)(0xc004e98992), BlockOwnerDeletion:(*bool)(0xc004e98993)}} Jun 26 00:41:55.346: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9e703950-207a-4acf-aeee-5a5ce40363d8", Controller:(*bool)(0xc0051a7d42), BlockOwnerDeletion:(*bool)(0xc0051a7d43)}} Jun 26 00:41:55.351: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1e5ebef2-d656-43dd-84a0-8137025611c6", Controller:(*bool)(0xc004f58352), BlockOwnerDeletion:(*bool)(0xc004f58353)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:42:00.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4459" for this suite. 
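Annotation: the three ownerReferences printed above form the circle pod1 <- pod3 <- pod2 <- pod1 that the garbage collector must not deadlock on. Building one link of that chain looks roughly like this; ownedPod is a hypothetical helper, and in the real test the owner UIDs are taken from objects that already exist on the server:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownedPod builds a pod whose sole owner is another pod. Chaining three of
// these with the references pointing in a cycle gives the dependency circle
// logged above; the collector must still be able to delete all of them.
func ownedPod(name string, owner *corev1.Pod) *corev1.Pod {
	ctrl, block := true, true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion:         "v1",
				Kind:               "Pod",
				Name:               owner.Name,
				UID:                owner.UID, // must match the live object's UID
				Controller:         &ctrl,
				BlockOwnerDeletion: &block,
			}},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
}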
• [SLOW TEST:5.346 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":294,"completed":195,"skipped":2993,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:42:00.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jun 26 00:42:00.530: INFO: Waiting up to 5m0s for pod "downward-api-87173484-5181-4a2c-92a6-1ef4d9a920dd" in namespace "downward-api-6039" to be "Succeeded or Failed" Jun 26 00:42:00.532: INFO: Pod "downward-api-87173484-5181-4a2c-92a6-1ef4d9a920dd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.818192ms Jun 26 00:42:02.620: INFO: Pod "downward-api-87173484-5181-4a2c-92a6-1ef4d9a920dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089821541s Jun 26 00:42:04.626: INFO: Pod "downward-api-87173484-5181-4a2c-92a6-1ef4d9a920dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095844314s STEP: Saw pod success Jun 26 00:42:04.626: INFO: Pod "downward-api-87173484-5181-4a2c-92a6-1ef4d9a920dd" satisfied condition "Succeeded or Failed" Jun 26 00:42:04.632: INFO: Trying to get logs from node latest-worker2 pod downward-api-87173484-5181-4a2c-92a6-1ef4d9a920dd container dapi-container: STEP: delete the pod Jun 26 00:42:04.653: INFO: Waiting for pod downward-api-87173484-5181-4a2c-92a6-1ef4d9a920dd to disappear Jun 26 00:42:04.691: INFO: Pod downward-api-87173484-5181-4a2c-92a6-1ef4d9a920dd no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:42:04.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6039" for this suite. 
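
The Downward API spec that just finished relies on a documented fallback: when a container declares no resource limits, env vars sourced from limits.cpu / limits.memory resolve to the node's allocatable values. A minimal pod of that shape is sketched below; the pod name, image and command are illustrative, not the test's exact manifest.

package e2edemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultLimitsPod asks the downward API for limits.cpu and limits.memory
// without declaring any limits on the container, so the values default to
// node allocatable, which is what the spec above verifies.
func defaultLimitsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
					}},
				},
			}},
		},
	}
}
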
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":294,"completed":196,"skipped":3007,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:42:04.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:42:08.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2763" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":197,"skipped":3022,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:42:08.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:42:09.387: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jun 26 00:42:11.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728929, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728929, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728929, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728728929, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:42:14.435: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:42:14.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9462" for this suite. STEP: Destroying namespace "webhook-9462-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.909 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":294,"completed":198,"skipped":3060,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:42:14.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-45424e8e-8c8c-4eda-b9f4-72e9c1771813 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] 
[sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:42:20.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8982" for this suite. • [SLOW TEST:6.163 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":199,"skipped":3084,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:42:20.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Jun 26 00:42:20.981: INFO: Waiting up to 5m0s for pod "var-expansion-635a6efb-d27c-48ea-a425-e9306d4531ed" in namespace "var-expansion-6383" to be "Succeeded or Failed" Jun 26 00:42:20.989: INFO: Pod "var-expansion-635a6efb-d27c-48ea-a425-e9306d4531ed": Phase="Pending", Reason="", readiness=false. Elapsed: 7.826063ms Jun 26 00:42:22.994: INFO: Pod "var-expansion-635a6efb-d27c-48ea-a425-e9306d4531ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012706668s Jun 26 00:42:25.037: INFO: Pod "var-expansion-635a6efb-d27c-48ea-a425-e9306d4531ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056024174s STEP: Saw pod success Jun 26 00:42:25.037: INFO: Pod "var-expansion-635a6efb-d27c-48ea-a425-e9306d4531ed" satisfied condition "Succeeded or Failed" Jun 26 00:42:25.041: INFO: Trying to get logs from node latest-worker pod var-expansion-635a6efb-d27c-48ea-a425-e9306d4531ed container dapi-container: STEP: delete the pod Jun 26 00:42:25.161: INFO: Waiting for pod var-expansion-635a6efb-d27c-48ea-a425-e9306d4531ed to disappear Jun 26 00:42:25.168: INFO: Pod var-expansion-635a6efb-d27c-48ea-a425-e9306d4531ed no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:42:25.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6383" for this suite. 
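
The Variable Expansion spec that just passed exercises $(VAR) substitution: a reference of the form $(NAME) inside a container's command or args is expanded by the kubelet from the container's env before the process starts. A minimal pod showing that mechanism follows; names, image and message text are illustrative.

package e2edemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// commandExpansionPod defines MESSAGE as an env var and references it
// from the command, where the kubelet expands it before exec, so the
// shell already sees the literal value.
func commandExpansionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "test message"}},
				// "$(MESSAGE)" is expanded by the kubelet; writing
				// "$$(MESSAGE)" instead would keep it literal.
				Command: []string{"sh", "-c", "echo $(MESSAGE)"},
			}},
		},
	}
}
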
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":294,"completed":200,"skipped":3151,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:42:25.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 26 00:42:25.221: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:42:32.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6695" for this suite. • [SLOW TEST:7.677 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":294,"completed":201,"skipped":3168,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:42:32.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Jun 26 00:42:33.458: INFO: created pod pod-service-account-defaultsa Jun 26 00:42:33.458: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 26 00:42:33.482: INFO: created pod 
pod-service-account-mountsa Jun 26 00:42:33.482: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 26 00:42:33.487: INFO: created pod pod-service-account-nomountsa Jun 26 00:42:33.487: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 26 00:42:33.510: INFO: created pod pod-service-account-defaultsa-mountspec Jun 26 00:42:33.511: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 26 00:42:33.573: INFO: created pod pod-service-account-mountsa-mountspec Jun 26 00:42:33.573: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 26 00:42:33.616: INFO: created pod pod-service-account-nomountsa-mountspec Jun 26 00:42:33.616: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 26 00:42:33.657: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 26 00:42:33.657: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 26 00:42:33.718: INFO: created pod pod-service-account-mountsa-nomountspec Jun 26 00:42:33.718: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 26 00:42:33.765: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 26 00:42:33.765: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:42:33.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4333" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":294,"completed":202,"skipped":3170,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:42:33.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-0b5ba10d-b884-465f-8179-d4246b75a2da STEP: Creating a pod to test consume secrets Jun 26 00:42:34.003: INFO: Waiting up to 5m0s for pod "pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8" in namespace "secrets-4053" to be "Succeeded or Failed" Jun 26 00:42:34.006: INFO: Pod "pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.568447ms Jun 26 00:42:36.109: INFO: Pod "pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.106513303s Jun 26 00:42:38.189: INFO: Pod "pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186680021s Jun 26 00:42:40.483: INFO: Pod "pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.480271118s Jun 26 00:42:42.585: INFO: Pod "pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.582706264s Jun 26 00:42:44.711: INFO: Pod "pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.708777679s Jun 26 00:42:46.866: INFO: Pod "pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8": Phase="Running", Reason="", readiness=true. Elapsed: 12.86305286s Jun 26 00:42:48.869: INFO: Pod "pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.866320397s STEP: Saw pod success Jun 26 00:42:48.869: INFO: Pod "pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8" satisfied condition "Succeeded or Failed" Jun 26 00:42:48.871: INFO: Trying to get logs from node latest-worker pod pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8 container secret-env-test: STEP: delete the pod Jun 26 00:42:48.912: INFO: Waiting for pod pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8 to disappear Jun 26 00:42:48.935: INFO: Pod pod-secrets-94187bd9-bf57-406d-9e57-c009c6087ec8 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:42:48.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4053" for this suite. • [SLOW TEST:15.041 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":294,"completed":203,"skipped":3171,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:42:48.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:43:05.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5247" for this suite. • [SLOW TEST:16.300 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":294,"completed":204,"skipped":3181,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:43:05.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-483, will wait for the garbage collector to delete the pods Jun 26 00:43:11.538: INFO: Deleting Job.batch foo took: 6.459177ms Jun 26 00:43:11.938: INFO: Terminating Job.batch foo pods took: 400.289066ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:43:55.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-483" for this suite. 
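
The Job spec above deletes the Job and then waits for the garbage collector to remove its pods (the "Deleting Job.batch foo took..." and "Terminating Job.batch foo pods took..." lines). The sketch below gets the same two-phase effect with foreground propagation plus a label poll; this is an assumption about one way to do it, not necessarily how the e2e framework's own helper is implemented.

package e2edemo

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteJobAndWait deletes a Job with foreground propagation, then polls
// until no pods carrying the job-name label remain. Job name, namespace
// and the poll intervals are illustrative.
func deleteJobAndWait(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationForeground
	if err := c.BatchV1().Jobs(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	}); err != nil {
		return err
	}
	// The job controller labels its pods with job-name=<job>, so an
	// empty list means the garbage collector has finished.
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
			LabelSelector: "job-name=" + name,
		})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == 0, nil
	})
}
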
• [SLOW TEST:50.106 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":294,"completed":205,"skipped":3208,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:43:55.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:44:06.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-741" for this suite. • [SLOW TEST:11.174 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":294,"completed":206,"skipped":3213,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:44:06.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 26 00:44:06.622: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:44:25.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8379" for this suite. • [SLOW TEST:18.737 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":294,"completed":207,"skipped":3226,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:44:25.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:44:25.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' Jun 26 00:44:25.487: INFO: stderr: "" Jun 26 00:44:25.487: INFO: stdout: "Client Version: 
version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.1.98+60b800358f7784\", GitCommit:\"60b800358f77848c4fac5376796e8a82b9039eb4\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T12:34:27Z\", GoVersion:\"go1.13.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:44:25.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7145" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":294,"completed":208,"skipped":3241,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:44:25.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 26 00:44:29.627: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:44:29.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8705" for this suite. 
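
The Container Runtime spec above hinges on TerminationMessagePolicy FallbackToLogsOnError: container logs are copied into the termination message only when the container fails, so a container that exits 0 leaves the message empty, which is the "Expected: &{} to match Container's Termination Message: --" check in the log. A minimal pod of that shape follows; names, image and command are illustrative.

package e2edemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fallbackToLogsPod writes only to stdout and succeeds, so with
// FallbackToLogsOnError the recorded termination message stays empty.
func fallbackToLogsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "termination-message-container",
				Image:                    "busybox",
				Command:                  []string{"sh", "-c", "echo to stdout only; exit 0"},
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
}

Had the command exited non-zero, the tail of the container log would have been surfaced in the container status instead of the empty message.
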
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":209,"skipped":3260,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:44:29.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-8dc0a182-2555-4342-b0d7-7f0de136a528 STEP: Creating a pod to test consume secrets Jun 26 00:44:29.778: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b98389a2-a691-40aa-a240-4198e5c41af6" in namespace "projected-4235" to be "Succeeded or Failed" Jun 26 00:44:29.791: INFO: Pod "pod-projected-secrets-b98389a2-a691-40aa-a240-4198e5c41af6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.756231ms Jun 26 00:44:32.040: INFO: Pod "pod-projected-secrets-b98389a2-a691-40aa-a240-4198e5c41af6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.262079556s Jun 26 00:44:34.045: INFO: Pod "pod-projected-secrets-b98389a2-a691-40aa-a240-4198e5c41af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.26667907s STEP: Saw pod success Jun 26 00:44:34.045: INFO: Pod "pod-projected-secrets-b98389a2-a691-40aa-a240-4198e5c41af6" satisfied condition "Succeeded or Failed" Jun 26 00:44:34.048: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-b98389a2-a691-40aa-a240-4198e5c41af6 container projected-secret-volume-test: STEP: delete the pod Jun 26 00:44:34.099: INFO: Waiting for pod pod-projected-secrets-b98389a2-a691-40aa-a240-4198e5c41af6 to disappear Jun 26 00:44:34.111: INFO: Pod pod-projected-secrets-b98389a2-a691-40aa-a240-4198e5c41af6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:44:34.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4235" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":210,"skipped":3265,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:44:34.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:44:34.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5828" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":294,"completed":211,"skipped":3317,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:44:34.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-xb8n STEP: Creating a pod to test atomic-volume-subpath Jun 26 00:44:34.401: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xb8n" in namespace "subpath-7986" to be "Succeeded or Failed" Jun 26 00:44:34.415: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.093543ms Jun 26 00:44:36.419: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018223269s Jun 26 00:44:38.424: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Running", Reason="", readiness=true. Elapsed: 4.022684925s Jun 26 00:44:40.428: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Running", Reason="", readiness=true. Elapsed: 6.02735015s Jun 26 00:44:42.433: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Running", Reason="", readiness=true. Elapsed: 8.032477816s Jun 26 00:44:44.437: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Running", Reason="", readiness=true. Elapsed: 10.036007114s Jun 26 00:44:46.441: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Running", Reason="", readiness=true. Elapsed: 12.040164479s Jun 26 00:44:48.446: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Running", Reason="", readiness=true. Elapsed: 14.045041224s Jun 26 00:44:50.450: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Running", Reason="", readiness=true. Elapsed: 16.049287585s Jun 26 00:44:52.455: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Running", Reason="", readiness=true. Elapsed: 18.05371683s Jun 26 00:44:54.459: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Running", Reason="", readiness=true. Elapsed: 20.058220977s Jun 26 00:44:56.464: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Running", Reason="", readiness=true. Elapsed: 22.062740955s Jun 26 00:44:58.468: INFO: Pod "pod-subpath-test-downwardapi-xb8n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.066692779s STEP: Saw pod success Jun 26 00:44:58.468: INFO: Pod "pod-subpath-test-downwardapi-xb8n" satisfied condition "Succeeded or Failed" Jun 26 00:44:58.470: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-xb8n container test-container-subpath-downwardapi-xb8n: STEP: delete the pod Jun 26 00:44:58.546: INFO: Waiting for pod pod-subpath-test-downwardapi-xb8n to disappear Jun 26 00:44:58.573: INFO: Pod pod-subpath-test-downwardapi-xb8n no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-xb8n Jun 26 00:44:58.574: INFO: Deleting pod "pod-subpath-test-downwardapi-xb8n" in namespace "subpath-7986" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:44:58.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7986" for this suite. 
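
The Subpath spec above combines an atomically written downwardAPI volume with a subPath mount: the pod stays Running while the container repeatedly reads a single projected file exposed through SubPath. A minimal pod of that shape is sketched below; volume name, paths and the container command are illustrative, not the test's actual manifest.

package e2edemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subpathDownwardPod mounts one downwardAPI volume twice: the whole
// volume at /processed, and the single "podname" file via SubPath at
// /test-volume, the atomic-writer-plus-subPath combination exercised
// by the spec above.
func subpathDownwardPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "downward",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test-volume && sleep 20"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "downward", MountPath: "/processed"},
					{Name: "downward", MountPath: "/test-volume", SubPath: "podname"},
				},
			}},
		},
	}
}
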
• [SLOW TEST:24.306 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":294,"completed":212,"skipped":3337,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:44:58.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-0606c637-6c1e-45d5-9774-2ea7236aa9f3 STEP: Creating a pod to test consume secrets Jun 26 00:44:58.738: INFO: Waiting up to 5m0s for pod "pod-secrets-8737f280-4d49-4d33-b436-022727c05533" in namespace "secrets-2341" to be "Succeeded or Failed" Jun 26 00:44:58.740: INFO: Pod "pod-secrets-8737f280-4d49-4d33-b436-022727c05533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627915ms Jun 26 00:45:00.745: INFO: Pod "pod-secrets-8737f280-4d49-4d33-b436-022727c05533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007080866s Jun 26 00:45:02.750: INFO: Pod "pod-secrets-8737f280-4d49-4d33-b436-022727c05533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011662322s STEP: Saw pod success Jun 26 00:45:02.750: INFO: Pod "pod-secrets-8737f280-4d49-4d33-b436-022727c05533" satisfied condition "Succeeded or Failed" Jun 26 00:45:02.752: INFO: Trying to get logs from node latest-worker pod pod-secrets-8737f280-4d49-4d33-b436-022727c05533 container secret-volume-test: STEP: delete the pod Jun 26 00:45:02.772: INFO: Waiting for pod pod-secrets-8737f280-4d49-4d33-b436-022727c05533 to disappear Jun 26 00:45:02.790: INFO: Pod pod-secrets-8737f280-4d49-4d33-b436-022727c05533 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:45:02.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2341" for this suite. STEP: Destroying namespace "secret-namespace-1534" for this suite. 
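
The Secrets spec above creates two namespaces (hence the two "Destroying namespace" lines) so it can plant same-named secrets with different payloads and confirm the pod mounts only the one from its own namespace. A minimal sketch of that setup follows; namespaces, the secret name and the key are illustrative assumptions.

package e2edemo

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createShadowedSecrets creates a secret named "shared-name" in two
// namespaces with distinct payloads. A pod in nsA that mounts
// "shared-name" must see only nsA's data; the sibling in nsB must
// never leak in, which is the property the spec above checks.
func createShadowedSecrets(ctx context.Context, c kubernetes.Interface, nsA, nsB string) error {
	for ns, payload := range map[string]string{nsA: "data-for-" + nsA, nsB: "data-for-" + nsB} {
		s := &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Name: "shared-name", Namespace: ns},
			StringData: map[string]string{"data-1": payload},
		}
		if _, err := c.CoreV1().Secrets(ns).Create(ctx, s, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return nil
}
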
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":294,"completed":213,"skipped":3342,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:45:02.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-rzdz STEP: Creating a pod to test atomic-volume-subpath Jun 26 00:45:02.924: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rzdz" in namespace "subpath-1914" to be "Succeeded or Failed" Jun 26 00:45:02.939: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Pending", Reason="", readiness=false. Elapsed: 15.746504ms Jun 26 00:45:04.943: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019874193s Jun 26 00:45:06.948: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Running", Reason="", readiness=true. Elapsed: 4.024469933s Jun 26 00:45:08.953: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Running", Reason="", readiness=true. Elapsed: 6.029453296s Jun 26 00:45:10.958: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Running", Reason="", readiness=true. Elapsed: 8.034125843s Jun 26 00:45:12.963: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Running", Reason="", readiness=true. Elapsed: 10.039086735s Jun 26 00:45:14.967: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Running", Reason="", readiness=true. Elapsed: 12.043114496s Jun 26 00:45:16.974: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Running", Reason="", readiness=true. Elapsed: 14.050343113s Jun 26 00:45:18.979: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Running", Reason="", readiness=true. Elapsed: 16.055125151s Jun 26 00:45:20.984: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Running", Reason="", readiness=true. Elapsed: 18.060236974s Jun 26 00:45:22.989: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Running", Reason="", readiness=true. Elapsed: 20.06546002s Jun 26 00:45:24.993: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Running", Reason="", readiness=true. Elapsed: 22.069865691s Jun 26 00:45:26.999: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.075001995s Jun 26 00:45:29.004: INFO: Pod "pod-subpath-test-configmap-rzdz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.080073521s STEP: Saw pod success Jun 26 00:45:29.004: INFO: Pod "pod-subpath-test-configmap-rzdz" satisfied condition "Succeeded or Failed" Jun 26 00:45:29.007: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-rzdz container test-container-subpath-configmap-rzdz: STEP: delete the pod Jun 26 00:45:29.024: INFO: Waiting for pod pod-subpath-test-configmap-rzdz to disappear Jun 26 00:45:29.028: INFO: Pod pod-subpath-test-configmap-rzdz no longer exists STEP: Deleting pod pod-subpath-test-configmap-rzdz Jun 26 00:45:29.028: INFO: Deleting pod "pod-subpath-test-configmap-rzdz" in namespace "subpath-1914" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:45:29.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1914" for this suite. • [SLOW TEST:26.264 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":294,"completed":214,"skipped":3370,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:45:29.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 26 00:45:29.142: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:45:37.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2806" for this suite. 
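
The InitContainer spec that just finished verifies the ordering contract on a RestartPolicy=Always pod: init containers run to completion one at a time and in order, and only then does the app container start and stay running. A minimal pod of that shape is sketched below; container names and images are illustrative.

package e2edemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainersPod runs init1 then init2 to completion before run1
// starts; run1 is then kept running under RestartPolicy=Always.
func initContainersPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
}
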
• [SLOW TEST:8.215 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":294,"completed":215,"skipped":3389,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:45:37.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 26 00:45:37.409: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 26 00:45:37.420: INFO: Waiting for terminating namespaces to be deleted... Jun 26 00:45:37.423: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jun 26 00:45:37.427: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) Jun 26 00:45:37.427: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 26 00:45:37.427: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) Jun 26 00:45:37.427: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 26 00:45:37.427: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) Jun 26 00:45:37.427: INFO: Container kindnet-cni ready: true, restart count 5 Jun 26 00:45:37.427: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) Jun 26 00:45:37.427: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 00:45:37.427: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jun 26 00:45:37.431: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) Jun 26 00:45:37.431: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 26 00:45:37.431: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) Jun 26 00:45:37.431: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 26 00:45:37.431: INFO: pod-init-ddc98334-40fb-4065-b3ad-51f7eb858a6a from init-container-2806 started at 2020-06-26 00:45:29 +0000 UTC (1 container status recorded)
Jun 26 00:45:37.431: INFO: Container run1 ready: true, restart count 0 Jun 26 00:45:37.431: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) Jun 26 00:45:37.431: INFO: Container kindnet-cni ready: true, restart count 5 Jun 26 00:45:37.431: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) Jun 26 00:45:37.431: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Jun 26 00:45:37.566: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker Jun 26 00:45:37.566: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 Jun 26 00:45:37.566: INFO: Pod pod-init-ddc98334-40fb-4065-b3ad-51f7eb858a6a requesting resource cpu=100m on Node latest-worker2 Jun 26 00:45:37.566: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker Jun 26 00:45:37.566: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 Jun 26 00:45:37.566: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker Jun 26 00:45:37.566: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jun 26 00:45:37.566: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Jun 26 00:45:37.572: INFO: Creating a pod which consumes cpu=11060m on Node latest-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-544947cd-185c-4980-bf8e-895c564a6028.161bf16b95bd05cb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7547/filler-pod-544947cd-185c-4980-bf8e-895c564a6028 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-544947cd-185c-4980-bf8e-895c564a6028.161bf16be0e45e75], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-544947cd-185c-4980-bf8e-895c564a6028.161bf16c2744632e], Reason = [Created], Message = [Created container filler-pod-544947cd-185c-4980-bf8e-895c564a6028] STEP: Considering event: Type = [Normal], Name = [filler-pod-544947cd-185c-4980-bf8e-895c564a6028.161bf16c4c794c95], Reason = [Started], Message = [Started container filler-pod-544947cd-185c-4980-bf8e-895c564a6028] STEP: Considering event: Type = [Normal], Name = [filler-pod-fabc66d9-0f70-4e61-aa45-7fb827a8dae0.161bf16b975c9178], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7547/filler-pod-fabc66d9-0f70-4e61-aa45-7fb827a8dae0 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-fabc66d9-0f70-4e61-aa45-7fb827a8dae0.161bf16c19b45949], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-fabc66d9-0f70-4e61-aa45-7fb827a8dae0.161bf16c5481d23c], Reason = [Created], Message = [Created container filler-pod-fabc66d9-0f70-4e61-aa45-7fb827a8dae0] STEP: Considering event: Type = [Normal], Name = [filler-pod-fabc66d9-0f70-4e61-aa45-7fb827a8dae0.161bf16c62d21b0d], Reason = [Started], Message = [Started container filler-pod-fabc66d9-0f70-4e61-aa45-7fb827a8dae0] STEP: Considering event: Type = [Warning], Name = [additional-pod.161bf16c8cf7fe1c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.161bf16c8e1823e5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:45:43.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7547" for this suite. 
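
The two FailedScheduling events are the expected outcome here: the filler pods consume almost all allocatable CPU on both workers, and the "additional" pod then asks for more than any node has left (the master is excluded by its taint). A sketch of that last pod follows; the request size is illustrative, since the test derives it from the nodes' remaining allocatable CPU:

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "600m"   # anything larger than the CPU left free on every schedulable node
      limits:
        cpu: "600m"
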
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.934 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":294,"completed":216,"skipped":3418,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:45:43.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:45:43.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e63015c-3e7a-4683-9c27-fccb6476b893" in namespace "downward-api-3793" to be "Succeeded or Failed" Jun 26 00:45:43.440: INFO: Pod "downwardapi-volume-8e63015c-3e7a-4683-9c27-fccb6476b893": Phase="Pending", Reason="", readiness=false. Elapsed: 132.050201ms Jun 26 00:45:45.443: INFO: Pod "downwardapi-volume-8e63015c-3e7a-4683-9c27-fccb6476b893": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135566504s Jun 26 00:45:47.448: INFO: Pod "downwardapi-volume-8e63015c-3e7a-4683-9c27-fccb6476b893": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.13977998s STEP: Saw pod success Jun 26 00:45:47.448: INFO: Pod "downwardapi-volume-8e63015c-3e7a-4683-9c27-fccb6476b893" satisfied condition "Succeeded or Failed" Jun 26 00:45:47.451: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8e63015c-3e7a-4683-9c27-fccb6476b893 container client-container: STEP: delete the pod Jun 26 00:45:47.492: INFO: Waiting for pod downwardapi-volume-8e63015c-3e7a-4683-9c27-fccb6476b893 to disappear Jun 26 00:45:47.503: INFO: Pod downwardapi-volume-8e63015c-3e7a-4683-9c27-fccb6476b893 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:45:47.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3793" for this suite. 
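
"Set mode on item file" refers to the per-item mode field of a downwardAPI volume: the kubelet applies it to that one projected file, and the test reads the permissions back from inside the container. A minimal sketch (image, command, paths, and the 0400 value are illustrative; "client-container" matches the container name in the log):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400   # per-item mode; overrides the volume-level defaultMode for this file
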
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":217,"skipped":3441,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:45:47.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 26 00:45:47.643: INFO: Waiting up to 5m0s for pod "pod-ab3deb63-a71b-4253-8a76-4d984b995fea" in namespace "emptydir-5085" to be "Succeeded or Failed" Jun 26 00:45:47.646: INFO: Pod "pod-ab3deb63-a71b-4253-8a76-4d984b995fea": Phase="Pending", Reason="", readiness=false. Elapsed: 3.757961ms Jun 26 00:45:49.747: INFO: Pod "pod-ab3deb63-a71b-4253-8a76-4d984b995fea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10472578s Jun 26 00:45:51.752: INFO: Pod "pod-ab3deb63-a71b-4253-8a76-4d984b995fea": Phase="Running", Reason="", readiness=true. Elapsed: 4.109609113s Jun 26 00:45:53.756: INFO: Pod "pod-ab3deb63-a71b-4253-8a76-4d984b995fea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11371606s STEP: Saw pod success Jun 26 00:45:53.756: INFO: Pod "pod-ab3deb63-a71b-4253-8a76-4d984b995fea" satisfied condition "Succeeded or Failed" Jun 26 00:45:53.762: INFO: Trying to get logs from node latest-worker2 pod pod-ab3deb63-a71b-4253-8a76-4d984b995fea container test-container: STEP: delete the pod Jun 26 00:45:53.799: INFO: Waiting for pod pod-ab3deb63-a71b-4253-8a76-4d984b995fea to disappear Jun 26 00:45:53.808: INFO: Pod pod-ab3deb63-a71b-4253-8a76-4d984b995fea no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:45:53.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5085" for this suite. 
• [SLOW TEST:6.249 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":218,"skipped":3443,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:45:53.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:45:53.920: INFO: Creating ReplicaSet my-hostname-basic-6902dcd4-37b6-40b0-9d11-d012291f0953 Jun 26 00:45:53.946: INFO: Pod name my-hostname-basic-6902dcd4-37b6-40b0-9d11-d012291f0953: Found 0 pods out of 1 Jun 26 00:45:58.950: INFO: Pod name my-hostname-basic-6902dcd4-37b6-40b0-9d11-d012291f0953: Found 1 pods out of 1 Jun 26 00:45:58.950: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-6902dcd4-37b6-40b0-9d11-d012291f0953" is running Jun 26 00:45:58.953: INFO: Pod "my-hostname-basic-6902dcd4-37b6-40b0-9d11-d012291f0953-f9q2v" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 00:45:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 00:45:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 00:45:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-26 00:45:53 +0000 UTC Reason: Message:}]) Jun 26 00:45:58.953: INFO: Trying to dial the pod Jun 26 00:46:04.019: INFO: Controller my-hostname-basic-6902dcd4-37b6-40b0-9d11-d012291f0953: Got expected result from replica 1 [my-hostname-basic-6902dcd4-37b6-40b0-9d11-d012291f0953-f9q2v]: "my-hostname-basic-6902dcd4-37b6-40b0-9d11-d012291f0953-f9q2v", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:46:04.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9519" for this suite. 
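
The ReplicaSet in this test serves each pod's own hostname over HTTP, which is what the "Trying to dial the pod" step checks replica by replica. An equivalent ReplicaSet as a sketch; the name/label scheme mirrors the my-hostname-basic-* names in the log, while the image and port are assumptions (any image that serves its hostname works):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-example
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumed image
        args: ["serve-hostname"]                         # replies with the pod's hostname
        ports:
        - containerPort: 9376
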
• [SLOW TEST:10.210 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":294,"completed":219,"skipped":3443,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:46:04.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 26 00:46:04.185: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19469db5-6cc3-49f4-a928-72c259de8d7f" in namespace "projected-4557" to be "Succeeded or Failed" Jun 26 00:46:04.189: INFO: Pod "downwardapi-volume-19469db5-6cc3-49f4-a928-72c259de8d7f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.565289ms Jun 26 00:46:06.245: INFO: Pod "downwardapi-volume-19469db5-6cc3-49f4-a928-72c259de8d7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059565286s Jun 26 00:46:08.250: INFO: Pod "downwardapi-volume-19469db5-6cc3-49f4-a928-72c259de8d7f": Phase="Running", Reason="", readiness=true. Elapsed: 4.064796869s Jun 26 00:46:10.253: INFO: Pod "downwardapi-volume-19469db5-6cc3-49f4-a928-72c259de8d7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068463236s STEP: Saw pod success Jun 26 00:46:10.254: INFO: Pod "downwardapi-volume-19469db5-6cc3-49f4-a928-72c259de8d7f" satisfied condition "Succeeded or Failed" Jun 26 00:46:10.256: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-19469db5-6cc3-49f4-a928-72c259de8d7f container client-container: STEP: delete the pod Jun 26 00:46:10.272: INFO: Waiting for pod downwardapi-volume-19469db5-6cc3-49f4-a928-72c259de8d7f to disappear Jun 26 00:46:10.277: INFO: Pod downwardapi-volume-19469db5-6cc3-49f4-a928-72c259de8d7f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:46:10.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4557" for this suite. 
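
DefaultMode, unlike the per-item mode exercised a few tests back, applies to every file in the volume that does not set its own mode. The volume wiring this test depends on, as a sketch (paths, image, and the 0400 value are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # applied to every projected file without an explicit per-item mode
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
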
• [SLOW TEST:6.278 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":220,"skipped":3452,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:46:10.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 00:46:10.995: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 00:46:13.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729170, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729170, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729171, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729170, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 00:46:16.443: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be 
admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:46:26.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-881" for this suite. STEP: Destroying namespace "webhook-881-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.500 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":294,"completed":221,"skipped":3495,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:46:26.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:46:30.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7119" for this suite.
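
The Docker Containers test turns on the command/args mapping: spec.containers[].command overrides the image ENTRYPOINT and args overrides CMD, so leaving both unset runs the image exactly as built. A sketch (the image here is an assumption; the test uses its own test image):

apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-example
spec:
  restartPolicy: Never
  containers:
  - name: use-defaults
    image: docker.io/library/nginx:1.19
    # no command: and no args:, so the image's own ENTRYPOINT and CMD run unchanged
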
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":294,"completed":222,"skipped":3503,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:46:30.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-8efa3097-f318-443e-a889-e5fe8bcb8818 STEP: Creating a pod to test consume secrets Jun 26 00:46:31.015: INFO: Waiting up to 5m0s for pod "pod-secrets-5249b575-43c2-4adf-be1c-ae1b1ed2ae24" in namespace "secrets-2293" to be "Succeeded or Failed" Jun 26 00:46:31.033: INFO: Pod "pod-secrets-5249b575-43c2-4adf-be1c-ae1b1ed2ae24": Phase="Pending", Reason="", readiness=false. Elapsed: 17.651108ms Jun 26 00:46:33.035: INFO: Pod "pod-secrets-5249b575-43c2-4adf-be1c-ae1b1ed2ae24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019939145s Jun 26 00:46:35.040: INFO: Pod "pod-secrets-5249b575-43c2-4adf-be1c-ae1b1ed2ae24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024398012s STEP: Saw pod success Jun 26 00:46:35.040: INFO: Pod "pod-secrets-5249b575-43c2-4adf-be1c-ae1b1ed2ae24" satisfied condition "Succeeded or Failed" Jun 26 00:46:35.044: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5249b575-43c2-4adf-be1c-ae1b1ed2ae24 container secret-volume-test: STEP: delete the pod Jun 26 00:46:35.079: INFO: Waiting for pod pod-secrets-5249b575-43c2-4adf-be1c-ae1b1ed2ae24 to disappear Jun 26 00:46:35.091: INFO: Pod pod-secrets-5249b575-43c2-4adf-be1c-ae1b1ed2ae24 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:46:35.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2293" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":223,"skipped":3507,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:46:35.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:85 Jun 26 00:46:35.175: INFO: Waiting up to 1m0s for all nodes to be ready Jun 26 00:47:35.209: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Jun 26 00:47:35.223: INFO: Created pod: pod0-sched-preemption-low-priority Jun 26 00:47:35.257: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:47:59.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2461" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:75 • [SLOW TEST:84.661 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":294,"completed":224,"skipped":3518,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:47:59.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jun 26 00:48:04.462: INFO: Successfully updated pod "annotationupdate38584d46-f07b-4469-8f2e-e632b2710f71" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:48:06.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7395" for this suite. 
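
This test passes because downward API volume files, unlike downward API environment variables, are refreshed by the kubelet after pod metadata changes: the "Successfully updated pod" line is the annotation write, and the test then waits for the projected file to catch up. The relevant wiring, as a sketch (names, image, annotation, and the polling command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations   # the file is rewritten when annotations change
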
• [SLOW TEST:6.737 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":294,"completed":225,"skipped":3548,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:48:06.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-a9203422-1b6f-49c0-a69b-fb3cbb342f07 in namespace container-probe-180 Jun 26 00:48:10.610: INFO: Started pod liveness-a9203422-1b6f-49c0-a69b-fb3cbb342f07 in namespace container-probe-180 STEP: checking the pod's current state and verifying that restartCount is present Jun 26 00:48:10.613: INFO: Initial restart count of pod liveness-a9203422-1b6f-49c0-a69b-fb3cbb342f07 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:52:11.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-180" for this suite. 
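
The four-minute runtime is the test itself: it starts a server on port 8080, attaches a tcpSocket liveness probe to that port, and then simply watches that restartCount stays at 0. A sketch (image and probe timings are assumptions; any server bound to 8080 works):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-example
spec:
  containers:
  - name: server
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumed image
    args: ["netexec", "--http-port=8080"]
    ports:
    - containerPort: 8080
    livenessProbe:
      tcpSocket:
        port: 8080             # healthy as long as a TCP connect to 8080 succeeds
      initialDelaySeconds: 15
      periodSeconds: 10
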
• [SLOW TEST:244.873 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":294,"completed":226,"skipped":3558,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:52:11.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jun 26 00:52:11.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9908' Jun 26 00:52:15.098: INFO: stderr: "" Jun 26 00:52:15.098: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 26 00:52:15.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9908' Jun 26 00:52:15.241: INFO: stderr: "" Jun 26 00:52:15.241: INFO: stdout: "update-demo-nautilus-dl68d update-demo-nautilus-vf9ld " Jun 26 00:52:15.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl68d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:15.365: INFO: stderr: "" Jun 26 00:52:15.365: INFO: stdout: "" Jun 26 00:52:15.365: INFO: update-demo-nautilus-dl68d is created but not running Jun 26 00:52:20.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9908' Jun 26 00:52:20.471: INFO: stderr: "" Jun 26 00:52:20.471: INFO: stdout: "update-demo-nautilus-dl68d update-demo-nautilus-vf9ld " Jun 26 00:52:20.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl68d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:20.569: INFO: stderr: "" Jun 26 00:52:20.569: INFO: stdout: "true" Jun 26 00:52:20.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl68d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:20.672: INFO: stderr: "" Jun 26 00:52:20.672: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 00:52:20.672: INFO: validating pod update-demo-nautilus-dl68d Jun 26 00:52:20.701: INFO: got data: { "image": "nautilus.jpg" } Jun 26 00:52:20.701: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 00:52:20.701: INFO: update-demo-nautilus-dl68d is verified up and running Jun 26 00:52:20.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vf9ld -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:20.811: INFO: stderr: "" Jun 26 00:52:20.811: INFO: stdout: "true" Jun 26 00:52:20.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vf9ld -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:20.932: INFO: stderr: "" Jun 26 00:52:20.932: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 00:52:20.932: INFO: validating pod update-demo-nautilus-vf9ld Jun 26 00:52:20.945: INFO: got data: { "image": "nautilus.jpg" } Jun 26 00:52:20.945: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 26 00:52:20.945: INFO: update-demo-nautilus-vf9ld is verified up and running STEP: scaling down the replication controller Jun 26 00:52:20.948: INFO: scanned /root for discovery docs: Jun 26 00:52:20.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9908' Jun 26 00:52:22.132: INFO: stderr: "" Jun 26 00:52:22.133: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 26 00:52:22.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9908' Jun 26 00:52:22.270: INFO: stderr: "" Jun 26 00:52:22.270: INFO: stdout: "update-demo-nautilus-dl68d update-demo-nautilus-vf9ld " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 26 00:52:27.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9908' Jun 26 00:52:27.379: INFO: stderr: "" Jun 26 00:52:27.379: INFO: stdout: "update-demo-nautilus-dl68d update-demo-nautilus-vf9ld " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 26 00:52:32.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9908' Jun 26 00:52:32.492: INFO: stderr: "" Jun 26 00:52:32.492: INFO: stdout: "update-demo-nautilus-dl68d update-demo-nautilus-vf9ld " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 26 00:52:37.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9908' Jun 26 00:52:37.582: INFO: stderr: "" Jun 26 00:52:37.582: INFO: stdout: "update-demo-nautilus-vf9ld " Jun 26 00:52:37.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vf9ld -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:37.676: INFO: stderr: "" Jun 26 00:52:37.676: INFO: stdout: "true" Jun 26 00:52:37.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vf9ld -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:37.783: INFO: stderr: "" Jun 26 00:52:37.783: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 00:52:37.783: INFO: validating pod update-demo-nautilus-vf9ld Jun 26 00:52:37.786: INFO: got data: { "image": "nautilus.jpg" } Jun 26 00:52:37.786: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 26 00:52:37.786: INFO: update-demo-nautilus-vf9ld is verified up and running STEP: scaling up the replication controller Jun 26 00:52:37.787: INFO: scanned /root for discovery docs: Jun 26 00:52:37.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9908' Jun 26 00:52:38.944: INFO: stderr: "" Jun 26 00:52:38.944: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 26 00:52:38.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9908' Jun 26 00:52:39.052: INFO: stderr: "" Jun 26 00:52:39.052: INFO: stdout: "update-demo-nautilus-58vsw update-demo-nautilus-vf9ld " Jun 26 00:52:39.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58vsw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:39.137: INFO: stderr: "" Jun 26 00:52:39.137: INFO: stdout: "" Jun 26 00:52:39.138: INFO: update-demo-nautilus-58vsw is created but not running Jun 26 00:52:44.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9908' Jun 26 00:52:44.240: INFO: stderr: "" Jun 26 00:52:44.240: INFO: stdout: "update-demo-nautilus-58vsw update-demo-nautilus-vf9ld " Jun 26 00:52:44.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58vsw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:44.343: INFO: stderr: "" Jun 26 00:52:44.343: INFO: stdout: "true" Jun 26 00:52:44.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58vsw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:44.444: INFO: stderr: "" Jun 26 00:52:44.444: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 00:52:44.444: INFO: validating pod update-demo-nautilus-58vsw Jun 26 00:52:44.448: INFO: got data: { "image": "nautilus.jpg" } Jun 26 00:52:44.448: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 00:52:44.448: INFO: update-demo-nautilus-58vsw is verified up and running Jun 26 00:52:44.448: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vf9ld -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:44.544: INFO: stderr: "" Jun 26 00:52:44.544: INFO: stdout: "true" Jun 26 00:52:44.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vf9ld -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9908' Jun 26 00:52:44.641: INFO: stderr: "" Jun 26 00:52:44.641: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 26 00:52:44.641: INFO: validating pod update-demo-nautilus-vf9ld Jun 26 00:52:44.644: INFO: got data: { "image": "nautilus.jpg" } Jun 26 00:52:44.644: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 26 00:52:44.644: INFO: update-demo-nautilus-vf9ld is verified up and running STEP: using delete to clean up resources Jun 26 00:52:44.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9908' Jun 26 00:52:44.779: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 00:52:44.779: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 26 00:52:44.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9908' Jun 26 00:52:44.885: INFO: stderr: "No resources found in kubectl-9908 namespace.\n" Jun 26 00:52:44.885: INFO: stdout: "" Jun 26 00:52:44.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9908 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 26 00:52:44.991: INFO: stderr: "" Jun 26 00:52:44.991: INFO: stdout: "update-demo-nautilus-58vsw\nupdate-demo-nautilus-vf9ld\n" Jun 26 00:52:45.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9908' Jun 26 00:52:45.605: INFO: stderr: "No resources found in kubectl-9908 namespace.\n" Jun 26 00:52:45.605: INFO: stdout: "" Jun 26 00:52:45.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9908 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 26 00:52:45.704: INFO: stderr: "" Jun 26 00:52:45.704: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:52:45.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9908" for this suite. 
• [SLOW TEST:34.338 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":294,"completed":227,"skipped":3563,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:52:45.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 26 00:52:45.861: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-a 90a26d41-ede0-441d-bea2-e5b320ca0cca 15927481 0 2020-06-26 00:52:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-26 00:52:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 00:52:45.862: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-a 90a26d41-ede0-441d-bea2-e5b320ca0cca 15927481 0 2020-06-26 00:52:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-26 00:52:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 26 00:52:55.868: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-a 90a26d41-ede0-441d-bea2-e5b320ca0cca 15927542 0 2020-06-26 00:52:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-26 00:52:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 00:52:55.869: 
INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-a 90a26d41-ede0-441d-bea2-e5b320ca0cca 15927542 0 2020-06-26 00:52:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-26 00:52:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 26 00:53:05.876: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-a 90a26d41-ede0-441d-bea2-e5b320ca0cca 15927574 0 2020-06-26 00:52:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-26 00:53:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 00:53:05.877: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-a 90a26d41-ede0-441d-bea2-e5b320ca0cca 15927574 0 2020-06-26 00:52:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-26 00:53:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 26 00:53:15.884: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-a 90a26d41-ede0-441d-bea2-e5b320ca0cca 15927604 0 2020-06-26 00:52:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-26 00:53:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 00:53:15.884: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-a 90a26d41-ede0-441d-bea2-e5b320ca0cca 15927604 0 2020-06-26 00:52:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-06-26 00:53:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 26 00:53:25.906: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-b 21371d20-b0c8-499e-9b52-6b3f7471f5fd 15927634 0 2020-06-26 00:53:25 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-26 00:53:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 00:53:25.906: INFO: Got 
: ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-b 21371d20-b0c8-499e-9b52-6b3f7471f5fd 15927634 0 2020-06-26 00:53:25 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-26 00:53:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 26 00:53:35.926: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-b 21371d20-b0c8-499e-9b52-6b3f7471f5fd 15927662 0 2020-06-26 00:53:25 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-26 00:53:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 00:53:35.926: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2093 /api/v1/namespaces/watch-2093/configmaps/e2e-watch-test-configmap-b 21371d20-b0c8-499e-9b52-6b3f7471f5fd 15927662 0 2020-06-26 00:53:25 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-06-26 00:53:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:53:45.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2093" for this suite. 
• [SLOW TEST:60.225 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":294,"completed":228,"skipped":3565,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:53:45.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:53:46.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-163" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":294,"completed":229,"skipped":3567,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:53:46.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:53:57.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5812" for this suite. • [SLOW TEST:11.268 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":294,"completed":230,"skipped":3625,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:53:57.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Jun 26 00:53:57.501: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:53:57.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1360" for this suite. 
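------------------------------
The ResourceQuota spec above follows a create, observe, release cycle against the quota's status. Below is a minimal client-go sketch of the first two steps, under assumed namespace, object names, and limits; the quota controller recalculates status asynchronously, hence the polling loop.

    // Sketch: a quota capping Services, then a Service whose creation
    // should surface in the quota's Used counters.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, ns := context.TODO(), "default" // namespace is illustrative

        _, err = cs.CoreV1().ResourceQuotas(ns).Create(ctx, &corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
            Spec: corev1.ResourceQuotaSpec{
                Hard: corev1.ResourceList{corev1.ResourceServices: resource.MustParse("10")},
            },
        }, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }

        _, err = cs.CoreV1().Services(ns).Create(ctx, &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "test-svc"},
            Spec:       corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80}}},
        }, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }

        // The quota controller updates Status.Used asynchronously, so poll.
        for i := 0; i < 30; i++ {
            q, err := cs.CoreV1().ResourceQuotas(ns).Get(ctx, "test-quota", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            if used, ok := q.Status.Used[corev1.ResourceServices]; ok && used.Value() > 0 {
                fmt.Printf("quota captures the service: %s used\n", used.String())
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("quota status never reflected the service")
    }

Deleting the Service and polling until Used drops back to zero would cover the spec's "released usage" step.
------------------------------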
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":294,"completed":231,"skipped":3638,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:53:57.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-b184e7af-9408-4802-b073-247e8fbe18c2 STEP: Creating a pod to test consume configMaps Jun 26 00:53:57.703: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bede75ba-f561-4380-b201-b707f96e3f11" in namespace "projected-3039" to be "Succeeded or Failed" Jun 26 00:53:57.738: INFO: Pod "pod-projected-configmaps-bede75ba-f561-4380-b201-b707f96e3f11": Phase="Pending", Reason="", readiness=false. Elapsed: 35.419949ms Jun 26 00:53:59.742: INFO: Pod "pod-projected-configmaps-bede75ba-f561-4380-b201-b707f96e3f11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039083552s Jun 26 00:54:01.746: INFO: Pod "pod-projected-configmaps-bede75ba-f561-4380-b201-b707f96e3f11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043428937s STEP: Saw pod success Jun 26 00:54:01.746: INFO: Pod "pod-projected-configmaps-bede75ba-f561-4380-b201-b707f96e3f11" satisfied condition "Succeeded or Failed" Jun 26 00:54:01.749: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-bede75ba-f561-4380-b201-b707f96e3f11 container projected-configmap-volume-test: STEP: delete the pod Jun 26 00:54:01.794: INFO: Waiting for pod pod-projected-configmaps-bede75ba-f561-4380-b201-b707f96e3f11 to disappear Jun 26 00:54:01.809: INFO: Pod pod-projected-configmaps-bede75ba-f561-4380-b201-b707f96e3f11 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:54:01.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3039" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":294,"completed":232,"skipped":3649,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:54:01.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8735 Jun 26 00:54:05.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8735 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 26 00:54:06.247: INFO: stderr: "I0626 00:54:06.049832 2789 log.go:172] (0xc00003a160) (0xc000872460) Create stream\nI0626 00:54:06.049901 2789 log.go:172] (0xc00003a160) (0xc000872460) Stream added, broadcasting: 1\nI0626 00:54:06.052542 2789 log.go:172] (0xc00003a160) Reply frame received for 1\nI0626 00:54:06.052592 2789 log.go:172] (0xc00003a160) (0xc0006ea140) Create stream\nI0626 00:54:06.052608 2789 log.go:172] (0xc00003a160) (0xc0006ea140) Stream added, broadcasting: 3\nI0626 00:54:06.053847 2789 log.go:172] (0xc00003a160) Reply frame received for 3\nI0626 00:54:06.053898 2789 log.go:172] (0xc00003a160) (0xc00088b040) Create stream\nI0626 00:54:06.053915 2789 log.go:172] (0xc00003a160) (0xc00088b040) Stream added, broadcasting: 5\nI0626 00:54:06.054765 2789 log.go:172] (0xc00003a160) Reply frame received for 5\nI0626 00:54:06.141940 2789 log.go:172] (0xc00003a160) Data frame received for 5\nI0626 00:54:06.141966 2789 log.go:172] (0xc00088b040) (5) Data frame handling\nI0626 00:54:06.141982 2789 log.go:172] (0xc00088b040) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0626 00:54:06.235841 2789 log.go:172] (0xc00003a160) Data frame received for 3\nI0626 00:54:06.235886 2789 log.go:172] (0xc0006ea140) (3) Data frame handling\nI0626 00:54:06.235919 2789 log.go:172] (0xc0006ea140) (3) Data frame sent\nI0626 00:54:06.236141 2789 log.go:172] (0xc00003a160) Data frame received for 5\nI0626 00:54:06.236161 2789 log.go:172] (0xc00088b040) (5) Data frame handling\nI0626 00:54:06.236465 2789 log.go:172] (0xc00003a160) Data frame received for 3\nI0626 00:54:06.236490 2789 log.go:172] (0xc0006ea140) (3) Data frame handling\nI0626 00:54:06.238515 2789 log.go:172] (0xc00003a160) Data frame received for 1\nI0626 00:54:06.238543 2789 log.go:172] (0xc000872460) (1) Data frame 
handling\nI0626 00:54:06.238559 2789 log.go:172] (0xc000872460) (1) Data frame sent\nI0626 00:54:06.238592 2789 log.go:172] (0xc00003a160) (0xc000872460) Stream removed, broadcasting: 1\nI0626 00:54:06.238685 2789 log.go:172] (0xc00003a160) Go away received\nI0626 00:54:06.239041 2789 log.go:172] (0xc00003a160) (0xc000872460) Stream removed, broadcasting: 1\nI0626 00:54:06.239087 2789 log.go:172] (0xc00003a160) (0xc0006ea140) Stream removed, broadcasting: 3\nI0626 00:54:06.239121 2789 log.go:172] (0xc00003a160) (0xc00088b040) Stream removed, broadcasting: 5\n" Jun 26 00:54:06.247: INFO: stdout: "iptables" Jun 26 00:54:06.247: INFO: proxyMode: iptables Jun 26 00:54:06.252: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 26 00:54:06.308: INFO: Pod kube-proxy-mode-detector still exists Jun 26 00:54:08.309: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 26 00:54:08.313: INFO: Pod kube-proxy-mode-detector still exists Jun 26 00:54:10.309: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 26 00:54:10.313: INFO: Pod kube-proxy-mode-detector still exists Jun 26 00:54:12.309: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 26 00:54:12.313: INFO: Pod kube-proxy-mode-detector still exists Jun 26 00:54:14.309: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 26 00:54:14.314: INFO: Pod kube-proxy-mode-detector still exists Jun 26 00:54:16.309: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 26 00:54:16.313: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-8735 STEP: creating replication controller affinity-nodeport-timeout in namespace services-8735 I0626 00:54:16.440390 8 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-8735, replica count: 3 I0626 00:54:19.490803 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 00:54:22.491096 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 26 00:54:22.503: INFO: Creating new exec pod Jun 26 00:54:27.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8735 execpod-affinity7cc7d -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Jun 26 00:54:27.743: INFO: stderr: "I0626 00:54:27.658570 2809 log.go:172] (0xc000202fd0) (0xc0006b9220) Create stream\nI0626 00:54:27.658608 2809 log.go:172] (0xc000202fd0) (0xc0006b9220) Stream added, broadcasting: 1\nI0626 00:54:27.661079 2809 log.go:172] (0xc000202fd0) Reply frame received for 1\nI0626 00:54:27.661342 2809 log.go:172] (0xc000202fd0) (0xc0004270e0) Create stream\nI0626 00:54:27.661392 2809 log.go:172] (0xc000202fd0) (0xc0004270e0) Stream added, broadcasting: 3\nI0626 00:54:27.663243 2809 log.go:172] (0xc000202fd0) Reply frame received for 3\nI0626 00:54:27.663285 2809 log.go:172] (0xc000202fd0) (0xc0002c7c20) Create stream\nI0626 00:54:27.663298 2809 log.go:172] (0xc000202fd0) (0xc0002c7c20) Stream added, broadcasting: 5\nI0626 00:54:27.664191 2809 log.go:172] (0xc000202fd0) Reply frame received for 5\nI0626 00:54:27.725348 2809 log.go:172] (0xc000202fd0) Data frame received for 5\nI0626 00:54:27.725374 2809 log.go:172] (0xc0002c7c20) (5) Data frame handling\nI0626 
00:54:27.725387 2809 log.go:172] (0xc0002c7c20) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0626 00:54:27.735122 2809 log.go:172] (0xc000202fd0) Data frame received for 5\nI0626 00:54:27.735150 2809 log.go:172] (0xc0002c7c20) (5) Data frame handling\nI0626 00:54:27.735163 2809 log.go:172] (0xc0002c7c20) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0626 00:54:27.735209 2809 log.go:172] (0xc000202fd0) Data frame received for 3\nI0626 00:54:27.735224 2809 log.go:172] (0xc0004270e0) (3) Data frame handling\nI0626 00:54:27.735288 2809 log.go:172] (0xc000202fd0) Data frame received for 5\nI0626 00:54:27.735302 2809 log.go:172] (0xc0002c7c20) (5) Data frame handling\nI0626 00:54:27.736952 2809 log.go:172] (0xc000202fd0) Data frame received for 1\nI0626 00:54:27.736994 2809 log.go:172] (0xc0006b9220) (1) Data frame handling\nI0626 00:54:27.737014 2809 log.go:172] (0xc0006b9220) (1) Data frame sent\nI0626 00:54:27.737039 2809 log.go:172] (0xc000202fd0) (0xc0006b9220) Stream removed, broadcasting: 1\nI0626 00:54:27.737082 2809 log.go:172] (0xc000202fd0) Go away received\nI0626 00:54:27.737601 2809 log.go:172] (0xc000202fd0) (0xc0006b9220) Stream removed, broadcasting: 1\nI0626 00:54:27.737626 2809 log.go:172] (0xc000202fd0) (0xc0004270e0) Stream removed, broadcasting: 3\nI0626 00:54:27.737646 2809 log.go:172] (0xc000202fd0) (0xc0002c7c20) Stream removed, broadcasting: 5\n" Jun 26 00:54:27.744: INFO: stdout: "" Jun 26 00:54:27.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8735 execpod-affinity7cc7d -- /bin/sh -x -c nc -zv -t -w 2 10.102.157.66 80' Jun 26 00:54:27.981: INFO: stderr: "I0626 00:54:27.883647 2826 log.go:172] (0xc000a9e000) (0xc000894d20) Create stream\nI0626 00:54:27.883699 2826 log.go:172] (0xc000a9e000) (0xc000894d20) Stream added, broadcasting: 1\nI0626 00:54:27.886171 2826 log.go:172] (0xc000a9e000) Reply frame received for 1\nI0626 00:54:27.886243 2826 log.go:172] (0xc000a9e000) (0xc00070eaa0) Create stream\nI0626 00:54:27.886276 2826 log.go:172] (0xc000a9e000) (0xc00070eaa0) Stream added, broadcasting: 3\nI0626 00:54:27.887406 2826 log.go:172] (0xc000a9e000) Reply frame received for 3\nI0626 00:54:27.887449 2826 log.go:172] (0xc000a9e000) (0xc0005e6b40) Create stream\nI0626 00:54:27.887471 2826 log.go:172] (0xc000a9e000) (0xc0005e6b40) Stream added, broadcasting: 5\nI0626 00:54:27.888435 2826 log.go:172] (0xc000a9e000) Reply frame received for 5\nI0626 00:54:27.972632 2826 log.go:172] (0xc000a9e000) Data frame received for 3\nI0626 00:54:27.972674 2826 log.go:172] (0xc00070eaa0) (3) Data frame handling\nI0626 00:54:27.972885 2826 log.go:172] (0xc000a9e000) Data frame received for 5\nI0626 00:54:27.972908 2826 log.go:172] (0xc0005e6b40) (5) Data frame handling\nI0626 00:54:27.972929 2826 log.go:172] (0xc0005e6b40) (5) Data frame sent\nI0626 00:54:27.972943 2826 log.go:172] (0xc000a9e000) Data frame received for 5\n+ nc -zv -t -w 2 10.102.157.66 80\nConnection to 10.102.157.66 80 port [tcp/http] succeeded!\nI0626 00:54:27.972954 2826 log.go:172] (0xc0005e6b40) (5) Data frame handling\nI0626 00:54:27.974637 2826 log.go:172] (0xc000a9e000) Data frame received for 1\nI0626 00:54:27.974675 2826 log.go:172] (0xc000894d20) (1) Data frame handling\nI0626 00:54:27.974698 2826 log.go:172] (0xc000894d20) (1) Data frame sent\nI0626 00:54:27.974731 2826 log.go:172] (0xc000a9e000) (0xc000894d20) Stream removed, broadcasting: 1\nI0626 
00:54:27.974830 2826 log.go:172] (0xc000a9e000) Go away received\nI0626 00:54:27.975191 2826 log.go:172] (0xc000a9e000) (0xc000894d20) Stream removed, broadcasting: 1\nI0626 00:54:27.975217 2826 log.go:172] (0xc000a9e000) (0xc00070eaa0) Stream removed, broadcasting: 3\nI0626 00:54:27.975237 2826 log.go:172] (0xc000a9e000) (0xc0005e6b40) Stream removed, broadcasting: 5\n" Jun 26 00:54:27.981: INFO: stdout: "" Jun 26 00:54:27.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8735 execpod-affinity7cc7d -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31825' Jun 26 00:54:28.183: INFO: stderr: "I0626 00:54:28.114926 2847 log.go:172] (0xc000782840) (0xc000358e60) Create stream\nI0626 00:54:28.114979 2847 log.go:172] (0xc000782840) (0xc000358e60) Stream added, broadcasting: 1\nI0626 00:54:28.117612 2847 log.go:172] (0xc000782840) Reply frame received for 1\nI0626 00:54:28.117658 2847 log.go:172] (0xc000782840) (0xc00013a000) Create stream\nI0626 00:54:28.117673 2847 log.go:172] (0xc000782840) (0xc00013a000) Stream added, broadcasting: 3\nI0626 00:54:28.118754 2847 log.go:172] (0xc000782840) Reply frame received for 3\nI0626 00:54:28.118790 2847 log.go:172] (0xc000782840) (0xc0002a6280) Create stream\nI0626 00:54:28.118800 2847 log.go:172] (0xc000782840) (0xc0002a6280) Stream added, broadcasting: 5\nI0626 00:54:28.119771 2847 log.go:172] (0xc000782840) Reply frame received for 5\nI0626 00:54:28.176693 2847 log.go:172] (0xc000782840) Data frame received for 3\nI0626 00:54:28.176749 2847 log.go:172] (0xc00013a000) (3) Data frame handling\nI0626 00:54:28.176815 2847 log.go:172] (0xc000782840) Data frame received for 5\nI0626 00:54:28.176850 2847 log.go:172] (0xc0002a6280) (5) Data frame handling\nI0626 00:54:28.176882 2847 log.go:172] (0xc0002a6280) (5) Data frame sent\nI0626 00:54:28.176901 2847 log.go:172] (0xc000782840) Data frame received for 5\nI0626 00:54:28.176934 2847 log.go:172] (0xc0002a6280) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31825\nConnection to 172.17.0.13 31825 port [tcp/31825] succeeded!\nI0626 00:54:28.179022 2847 log.go:172] (0xc000782840) Data frame received for 1\nI0626 00:54:28.179064 2847 log.go:172] (0xc000358e60) (1) Data frame handling\nI0626 00:54:28.179120 2847 log.go:172] (0xc000358e60) (1) Data frame sent\nI0626 00:54:28.179164 2847 log.go:172] (0xc000782840) (0xc000358e60) Stream removed, broadcasting: 1\nI0626 00:54:28.179191 2847 log.go:172] (0xc000782840) Go away received\nI0626 00:54:28.179603 2847 log.go:172] (0xc000782840) (0xc000358e60) Stream removed, broadcasting: 1\nI0626 00:54:28.179629 2847 log.go:172] (0xc000782840) (0xc00013a000) Stream removed, broadcasting: 3\nI0626 00:54:28.179644 2847 log.go:172] (0xc000782840) (0xc0002a6280) Stream removed, broadcasting: 5\n" Jun 26 00:54:28.183: INFO: stdout: "" Jun 26 00:54:28.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8735 execpod-affinity7cc7d -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31825' Jun 26 00:54:28.398: INFO: stderr: "I0626 00:54:28.332459 2870 log.go:172] (0xc0009736b0) (0xc0006c59a0) Create stream\nI0626 00:54:28.332541 2870 log.go:172] (0xc0009736b0) (0xc0006c59a0) Stream added, broadcasting: 1\nI0626 00:54:28.336229 2870 log.go:172] (0xc0009736b0) Reply frame received for 1\nI0626 00:54:28.336293 2870 log.go:172] (0xc0009736b0) (0xc0006e0460) Create stream\nI0626 00:54:28.336317 2870 log.go:172] (0xc0009736b0) 
(0xc0006e0460) Stream added, broadcasting: 3\nI0626 00:54:28.337587 2870 log.go:172] (0xc0009736b0) Reply frame received for 3\nI0626 00:54:28.337619 2870 log.go:172] (0xc0009736b0) (0xc00058cbe0) Create stream\nI0626 00:54:28.337629 2870 log.go:172] (0xc0009736b0) (0xc00058cbe0) Stream added, broadcasting: 5\nI0626 00:54:28.338863 2870 log.go:172] (0xc0009736b0) Reply frame received for 5\nI0626 00:54:28.391161 2870 log.go:172] (0xc0009736b0) Data frame received for 3\nI0626 00:54:28.391213 2870 log.go:172] (0xc0006e0460) (3) Data frame handling\nI0626 00:54:28.391248 2870 log.go:172] (0xc0009736b0) Data frame received for 5\nI0626 00:54:28.391267 2870 log.go:172] (0xc00058cbe0) (5) Data frame handling\nI0626 00:54:28.391284 2870 log.go:172] (0xc00058cbe0) (5) Data frame sent\nI0626 00:54:28.391294 2870 log.go:172] (0xc0009736b0) Data frame received for 5\nI0626 00:54:28.391304 2870 log.go:172] (0xc00058cbe0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31825\nConnection to 172.17.0.12 31825 port [tcp/31825] succeeded!\nI0626 00:54:28.392796 2870 log.go:172] (0xc0009736b0) Data frame received for 1\nI0626 00:54:28.392825 2870 log.go:172] (0xc0006c59a0) (1) Data frame handling\nI0626 00:54:28.392849 2870 log.go:172] (0xc0006c59a0) (1) Data frame sent\nI0626 00:54:28.392867 2870 log.go:172] (0xc0009736b0) (0xc0006c59a0) Stream removed, broadcasting: 1\nI0626 00:54:28.392971 2870 log.go:172] (0xc0009736b0) Go away received\nI0626 00:54:28.393485 2870 log.go:172] (0xc0009736b0) (0xc0006c59a0) Stream removed, broadcasting: 1\nI0626 00:54:28.393516 2870 log.go:172] (0xc0009736b0) (0xc0006e0460) Stream removed, broadcasting: 3\nI0626 00:54:28.393533 2870 log.go:172] (0xc0009736b0) (0xc00058cbe0) Stream removed, broadcasting: 5\n" Jun 26 00:54:28.398: INFO: stdout: "" Jun 26 00:54:28.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8735 execpod-affinity7cc7d -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31825/ ; done' Jun 26 00:54:28.685: INFO: stderr: "I0626 00:54:28.543030 2889 log.go:172] (0xc0000e8420) (0xc0002f01e0) Create stream\nI0626 00:54:28.543081 2889 log.go:172] (0xc0000e8420) (0xc0002f01e0) Stream added, broadcasting: 1\nI0626 00:54:28.545966 2889 log.go:172] (0xc0000e8420) Reply frame received for 1\nI0626 00:54:28.546028 2889 log.go:172] (0xc0000e8420) (0xc0002f0820) Create stream\nI0626 00:54:28.546047 2889 log.go:172] (0xc0000e8420) (0xc0002f0820) Stream added, broadcasting: 3\nI0626 00:54:28.547281 2889 log.go:172] (0xc0000e8420) Reply frame received for 3\nI0626 00:54:28.547321 2889 log.go:172] (0xc0000e8420) (0xc0002f0f00) Create stream\nI0626 00:54:28.547332 2889 log.go:172] (0xc0000e8420) (0xc0002f0f00) Stream added, broadcasting: 5\nI0626 00:54:28.548302 2889 log.go:172] (0xc0000e8420) Reply frame received for 5\nI0626 00:54:28.589859 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.589912 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.589945 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.589962 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.589986 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.589996 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.595055 2889 log.go:172] (0xc0000e8420) Data frame received 
for 3\nI0626 00:54:28.595075 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.595098 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.595707 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.595751 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.595780 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.595806 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.595817 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.595833 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.600519 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.600542 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.600563 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.601704 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.601744 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.601765 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.601788 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.601807 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.601830 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.604821 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.604847 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.604870 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.605762 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.605793 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.605805 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.605822 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.605834 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.605846 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.610349 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.610369 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.610387 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.610681 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.610715 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.610737 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.610768 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.610788 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.610820 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.614948 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.614980 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.615000 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.615462 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.615497 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.615529 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n+ I0626 00:54:28.615642 2889 log.go:172] (0xc0000e8420) Data frame 
received for 5\nI0626 00:54:28.615662 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.615682 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\nI0626 00:54:28.615693 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nechoI0626 00:54:28.615703 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.615748 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n\nI0626 00:54:28.615986 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.616021 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.616034 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.616051 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.616072 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.616087 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.619303 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.619327 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.619345 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.619782 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.619813 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.619842 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.619869 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.619882 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.619901 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.627436 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.627451 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.627464 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.628263 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.628291 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.628312 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\nI0626 00:54:28.628323 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.628333 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.628355 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.628375 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.628398 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\nI0626 00:54:28.628420 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.631997 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.632009 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.632015 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.632485 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.632513 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.632535 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.632547 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0626 00:54:28.632561 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.632586 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\nI0626 00:54:28.632597 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 
00:54:28.632608 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.632624 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n http://172.17.0.13:31825/\nI0626 00:54:28.636974 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.636998 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.637011 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.637742 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.637765 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.637781 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.637816 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.637830 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.637846 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\nI0626 00:54:28.642063 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.642091 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.642121 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.642965 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.642993 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.643022 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.643038 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.643060 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.643082 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\nI0626 00:54:28.649870 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.649899 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.649916 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\nI0626 00:54:28.649929 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.649944 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.649961 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.649974 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.649987 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.650057 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\nI0626 00:54:28.650096 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.650121 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.650144 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.653304 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.653327 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.653341 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.653813 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.653845 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.653866 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.653892 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.653912 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.653931 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\nI0626 00:54:28.659033 2889 log.go:172] 
(0xc0000e8420) Data frame received for 3\nI0626 00:54:28.659058 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.659080 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.659692 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.659716 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.659729 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.659748 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.659774 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.659800 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.665075 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.665098 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.665287 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.666098 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.666119 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.666139 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.666232 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.666251 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.666270 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.670861 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.670877 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.670885 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.671243 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.671260 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.671270 2889 log.go:172] (0xc0002f0f00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.671280 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.671302 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.671328 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.676451 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.676468 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.676478 2889 log.go:172] (0xc0002f0820) (3) Data frame sent\nI0626 00:54:28.677094 2889 log.go:172] (0xc0000e8420) Data frame received for 5\nI0626 00:54:28.677290 2889 log.go:172] (0xc0002f0f00) (5) Data frame handling\nI0626 00:54:28.677673 2889 log.go:172] (0xc0000e8420) Data frame received for 3\nI0626 00:54:28.677696 2889 log.go:172] (0xc0002f0820) (3) Data frame handling\nI0626 00:54:28.679608 2889 log.go:172] (0xc0000e8420) Data frame received for 1\nI0626 00:54:28.679634 2889 log.go:172] (0xc0002f01e0) (1) Data frame handling\nI0626 00:54:28.679657 2889 log.go:172] (0xc0002f01e0) (1) Data frame sent\nI0626 00:54:28.679679 2889 log.go:172] (0xc0000e8420) (0xc0002f01e0) Stream removed, broadcasting: 1\nI0626 00:54:28.679865 2889 log.go:172] (0xc0000e8420) Go away received\nI0626 00:54:28.680062 2889 log.go:172] (0xc0000e8420) (0xc0002f01e0) Stream removed, broadcasting: 1\nI0626 00:54:28.680084 2889 log.go:172] (0xc0000e8420) (0xc0002f0820) Stream removed, broadcasting: 3\nI0626 00:54:28.680096 2889 log.go:172] (0xc0000e8420) (0xc0002f0f00) Stream removed, broadcasting: 5\n" 
Jun 26 00:54:28.686: INFO: stdout: "\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn\naffinity-nodeport-timeout-wqnxn" Jun 26 00:54:28.686: INFO: Received response from host: Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Received response from host: affinity-nodeport-timeout-wqnxn Jun 26 00:54:28.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8735 execpod-affinity7cc7d -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31825/' Jun 26 00:54:28.899: INFO: stderr: "I0626 00:54:28.826494 2910 log.go:172] (0xc000c15600) (0xc0008508c0) Create stream\nI0626 00:54:28.826557 2910 log.go:172] (0xc000c15600) (0xc0008508c0) Stream added, broadcasting: 1\nI0626 00:54:28.829732 2910 log.go:172] (0xc000c15600) Reply frame received for 1\nI0626 00:54:28.829773 2910 log.go:172] (0xc000c15600) (0xc0006e57c0) Create stream\nI0626 00:54:28.829786 2910 log.go:172] (0xc000c15600) (0xc0006e57c0) Stream added, broadcasting: 3\nI0626 00:54:28.830677 2910 log.go:172] (0xc000c15600) Reply frame received for 3\nI0626 00:54:28.830728 2910 log.go:172] (0xc000c15600) (0xc000850dc0) Create stream\nI0626 00:54:28.830761 2910 log.go:172] (0xc000c15600) (0xc000850dc0) Stream added, broadcasting: 5\nI0626 00:54:28.831823 2910 log.go:172] (0xc000c15600) Reply frame received for 5\nI0626 00:54:28.884942 2910 log.go:172] (0xc000c15600) Data frame received for 5\nI0626 00:54:28.884980 2910 log.go:172] (0xc000850dc0) (5) Data frame handling\nI0626 00:54:28.885006 2910 log.go:172] (0xc000850dc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:28.890257 2910 log.go:172] (0xc000c15600) Data frame received for 3\nI0626 00:54:28.890287 2910 log.go:172] (0xc0006e57c0) (3) 
Data frame handling\nI0626 00:54:28.890315 2910 log.go:172] (0xc0006e57c0) (3) Data frame sent\nI0626 00:54:28.890811 2910 log.go:172] (0xc000c15600) Data frame received for 5\nI0626 00:54:28.890839 2910 log.go:172] (0xc000850dc0) (5) Data frame handling\nI0626 00:54:28.891116 2910 log.go:172] (0xc000c15600) Data frame received for 3\nI0626 00:54:28.891151 2910 log.go:172] (0xc0006e57c0) (3) Data frame handling\nI0626 00:54:28.892473 2910 log.go:172] (0xc000c15600) Data frame received for 1\nI0626 00:54:28.892506 2910 log.go:172] (0xc0008508c0) (1) Data frame handling\nI0626 00:54:28.892534 2910 log.go:172] (0xc0008508c0) (1) Data frame sent\nI0626 00:54:28.892666 2910 log.go:172] (0xc000c15600) (0xc0008508c0) Stream removed, broadcasting: 1\nI0626 00:54:28.892703 2910 log.go:172] (0xc000c15600) Go away received\nI0626 00:54:28.893006 2910 log.go:172] (0xc000c15600) (0xc0008508c0) Stream removed, broadcasting: 1\nI0626 00:54:28.893024 2910 log.go:172] (0xc000c15600) (0xc0006e57c0) Stream removed, broadcasting: 3\nI0626 00:54:28.893032 2910 log.go:172] (0xc000c15600) (0xc000850dc0) Stream removed, broadcasting: 5\n" Jun 26 00:54:28.899: INFO: stdout: "affinity-nodeport-timeout-wqnxn" Jun 26 00:54:43.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8735 execpod-affinity7cc7d -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31825/' Jun 26 00:54:44.134: INFO: stderr: "I0626 00:54:44.040427 2930 log.go:172] (0xc000b4b600) (0xc000c2c320) Create stream\nI0626 00:54:44.040488 2930 log.go:172] (0xc000b4b600) (0xc000c2c320) Stream added, broadcasting: 1\nI0626 00:54:44.045740 2930 log.go:172] (0xc000b4b600) Reply frame received for 1\nI0626 00:54:44.045799 2930 log.go:172] (0xc000b4b600) (0xc00085c320) Create stream\nI0626 00:54:44.045818 2930 log.go:172] (0xc000b4b600) (0xc00085c320) Stream added, broadcasting: 3\nI0626 00:54:44.047030 2930 log.go:172] (0xc000b4b600) Reply frame received for 3\nI0626 00:54:44.047068 2930 log.go:172] (0xc000b4b600) (0xc000564aa0) Create stream\nI0626 00:54:44.047084 2930 log.go:172] (0xc000b4b600) (0xc000564aa0) Stream added, broadcasting: 5\nI0626 00:54:44.049648 2930 log.go:172] (0xc000b4b600) Reply frame received for 5\nI0626 00:54:44.122730 2930 log.go:172] (0xc000b4b600) Data frame received for 5\nI0626 00:54:44.122757 2930 log.go:172] (0xc000564aa0) (5) Data frame handling\nI0626 00:54:44.122774 2930 log.go:172] (0xc000564aa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31825/\nI0626 00:54:44.125099 2930 log.go:172] (0xc000b4b600) Data frame received for 3\nI0626 00:54:44.125309 2930 log.go:172] (0xc00085c320) (3) Data frame handling\nI0626 00:54:44.125328 2930 log.go:172] (0xc00085c320) (3) Data frame sent\nI0626 00:54:44.126112 2930 log.go:172] (0xc000b4b600) Data frame received for 5\nI0626 00:54:44.126178 2930 log.go:172] (0xc000564aa0) (5) Data frame handling\nI0626 00:54:44.126214 2930 log.go:172] (0xc000b4b600) Data frame received for 3\nI0626 00:54:44.126333 2930 log.go:172] (0xc00085c320) (3) Data frame handling\nI0626 00:54:44.127865 2930 log.go:172] (0xc000b4b600) Data frame received for 1\nI0626 00:54:44.127891 2930 log.go:172] (0xc000c2c320) (1) Data frame handling\nI0626 00:54:44.127916 2930 log.go:172] (0xc000c2c320) (1) Data frame sent\nI0626 00:54:44.127941 2930 log.go:172] (0xc000b4b600) (0xc000c2c320) Stream removed, broadcasting: 1\nI0626 00:54:44.127982 2930 log.go:172] (0xc000b4b600) Go away 
received\nI0626 00:54:44.128426 2930 log.go:172] (0xc000b4b600) (0xc000c2c320) Stream removed, broadcasting: 1\nI0626 00:54:44.128448 2930 log.go:172] (0xc000b4b600) (0xc00085c320) Stream removed, broadcasting: 3\nI0626 00:54:44.128460 2930 log.go:172] (0xc000b4b600) (0xc000564aa0) Stream removed, broadcasting: 5\n" Jun 26 00:54:44.134: INFO: stdout: "affinity-nodeport-timeout-w4v98" Jun 26 00:54:44.134: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-8735, will wait for the garbage collector to delete the pods Jun 26 00:54:44.214: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.764045ms Jun 26 00:54:44.715: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.354589ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:54:54.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8735" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:53.157 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":233,"skipped":3687,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:54:54.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:54:55.052: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 26 00:54:57.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-587 create -f -' Jun 26 00:55:01.106: INFO: stderr: "" Jun 26 00:55:01.106: INFO: stdout: "e2e-test-crd-publish-openapi-5050-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 26 00:55:01.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-587 delete e2e-test-crd-publish-openapi-5050-crds test-cr' Jun 26 00:55:01.272: INFO: stderr: "" Jun 26 00:55:01.272: INFO: stdout: "e2e-test-crd-publish-openapi-5050-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jun 26 00:55:01.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-587 apply -f -' Jun 26 00:55:01.598: INFO: stderr: "" Jun 26 00:55:01.598: INFO: stdout: "e2e-test-crd-publish-openapi-5050-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 26 00:55:01.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-587 delete e2e-test-crd-publish-openapi-5050-crds test-cr' Jun 26 00:55:01.713: INFO: stderr: "" Jun 26 00:55:01.713: INFO: stdout: "e2e-test-crd-publish-openapi-5050-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 26 00:55:01.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5050-crds' Jun 26 00:55:01.963: INFO: stderr: "" Jun 26 00:55:01.963: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5050-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:55:03.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-587" for this suite. 
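------------------------------
The CRD published in the spec above preserves unknown fields inside its nested spec/status objects, which is what the kubectl explain output reflects ("preserve-unknown-properties in nested field"). A sketch of such a CRD using the apiextensions v1 Go types; the group and names here are illustrative stand-ins for the generated e2e-test-crd-publish-openapi-* identifiers.

    // Sketch: a CRD whose spec/status are open objects, i.e. unknown
    // fields are preserved rather than pruned by the API server.
    package main

    import (
        "context"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        preserve := true
        // An object schema that keeps whatever nested properties it is given.
        openObj := apiextv1.JSONSchemaProps{Type: "object", XPreserveUnknownFields: &preserve}

        crd := &apiextv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "waldos.example.com"}, // illustrative
            Spec: apiextv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextv1.NamespaceScoped,
                Names: apiextv1.CustomResourceDefinitionNames{
                    Plural: "waldos", Singular: "waldo", Kind: "Waldo", ListKind: "WaldoList",
                },
                Versions: []apiextv1.CustomResourceDefinitionVersion{{
                    Name: "v1", Served: true, Storage: true,
                    Schema: &apiextv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
                            Type: "object",
                            Properties: map[string]apiextv1.JSONSchemaProps{
                                "spec":   openObj, // unknown properties allowed here
                                "status": openObj,
                            },
                        },
                    },
                }},
            },
        }
        if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(
            context.TODO(), crd, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

With spec/status marked this way, pruning is skipped inside those fields, so the create and apply requests with arbitrary nested properties succeed as seen in the log.
------------------------------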
• [SLOW TEST:8.949 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":294,"completed":234,"skipped":3708,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:55:03.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-db2dea59-443d-4b56-8b02-d004674f69f8 STEP: Creating a pod to test consume secrets Jun 26 00:55:04.044: INFO: Waiting up to 5m0s for pod "pod-secrets-910db535-ac4e-41e0-8685-3ddd451fe6a4" in namespace "secrets-2453" to be "Succeeded or Failed" Jun 26 00:55:04.050: INFO: Pod "pod-secrets-910db535-ac4e-41e0-8685-3ddd451fe6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.79141ms Jun 26 00:55:06.063: INFO: Pod "pod-secrets-910db535-ac4e-41e0-8685-3ddd451fe6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018958999s Jun 26 00:55:08.069: INFO: Pod "pod-secrets-910db535-ac4e-41e0-8685-3ddd451fe6a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024547513s STEP: Saw pod success Jun 26 00:55:08.069: INFO: Pod "pod-secrets-910db535-ac4e-41e0-8685-3ddd451fe6a4" satisfied condition "Succeeded or Failed" Jun 26 00:55:08.072: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-910db535-ac4e-41e0-8685-3ddd451fe6a4 container secret-volume-test: STEP: delete the pod Jun 26 00:55:08.129: INFO: Waiting for pod pod-secrets-910db535-ac4e-41e0-8685-3ddd451fe6a4 to disappear Jun 26 00:55:08.140: INFO: Pod pod-secrets-910db535-ac4e-41e0-8685-3ddd451fe6a4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:55:08.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2453" for this suite. 
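------------------------------
The Secrets spec above checks that a non-root user can read secret files mounted with a restrictive defaultMode once fsGroup is set on the pod. Roughly the pod shape involved, as a sketch; the secret name, uid/gid, mode, and image are assumptions.

    // Sketch: a secret volume with an explicit defaultMode, read by a
    // non-root user via the pod-level fsGroup.
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        mode := int32(0440)                  // group-readable only
        uid, gid := int64(1000), int64(1000) // illustrative non-root uid/gid

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                // fsGroup makes the kubelet chown the volume so the
                // non-root user can read files created with mode 0440.
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid},
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName:  "secret-test", // must exist in the namespace
                            DefaultMode: &mode,
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/*"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
------------------------------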
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":235,"skipped":3719,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:55:08.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3464 STEP: creating service affinity-clusterip-transition in namespace services-3464 STEP: creating replication controller affinity-clusterip-transition in namespace services-3464 I0626 00:55:08.282515 8 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-3464, replica count: 3 I0626 00:55:11.332971 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 00:55:14.333413 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 26 00:55:14.339: INFO: Creating new exec pod Jun 26 00:55:19.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3464 execpod-affinityps9t8 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jun 26 00:55:19.583: INFO: stderr: "I0626 00:55:19.486472 3061 log.go:172] (0xc000b354a0) (0xc000684c80) Create stream\nI0626 00:55:19.486519 3061 log.go:172] (0xc000b354a0) (0xc000684c80) Stream added, broadcasting: 1\nI0626 00:55:19.489698 3061 log.go:172] (0xc000b354a0) Reply frame received for 1\nI0626 00:55:19.489740 3061 log.go:172] (0xc000b354a0) (0xc00063b860) Create stream\nI0626 00:55:19.489754 3061 log.go:172] (0xc000b354a0) (0xc00063b860) Stream added, broadcasting: 3\nI0626 00:55:19.490526 3061 log.go:172] (0xc000b354a0) Reply frame received for 3\nI0626 00:55:19.490581 3061 log.go:172] (0xc000b354a0) (0xc0005560a0) Create stream\nI0626 00:55:19.490611 3061 log.go:172] (0xc000b354a0) (0xc0005560a0) Stream added, broadcasting: 5\nI0626 00:55:19.491410 3061 log.go:172] (0xc000b354a0) Reply frame received for 5\nI0626 00:55:19.574747 3061 log.go:172] (0xc000b354a0) Data frame received for 5\nI0626 00:55:19.574790 3061 log.go:172] (0xc0005560a0) (5) Data frame handling\nI0626 00:55:19.574810 3061 log.go:172] (0xc0005560a0) (5) Data frame sent\n+ nc -zv -t -w 2 
affinity-clusterip-transition 80\nI0626 00:55:19.574925 3061 log.go:172] (0xc000b354a0) Data frame received for 5\nI0626 00:55:19.574944 3061 log.go:172] (0xc0005560a0) (5) Data frame handling\nI0626 00:55:19.574957 3061 log.go:172] (0xc0005560a0) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0626 00:55:19.575345 3061 log.go:172] (0xc000b354a0) Data frame received for 3\nI0626 00:55:19.575373 3061 log.go:172] (0xc00063b860) (3) Data frame handling\nI0626 00:55:19.575400 3061 log.go:172] (0xc000b354a0) Data frame received for 5\nI0626 00:55:19.575417 3061 log.go:172] (0xc0005560a0) (5) Data frame handling\nI0626 00:55:19.577014 3061 log.go:172] (0xc000b354a0) Data frame received for 1\nI0626 00:55:19.577043 3061 log.go:172] (0xc000684c80) (1) Data frame handling\nI0626 00:55:19.577068 3061 log.go:172] (0xc000684c80) (1) Data frame sent\nI0626 00:55:19.577093 3061 log.go:172] (0xc000b354a0) (0xc000684c80) Stream removed, broadcasting: 1\nI0626 00:55:19.577327 3061 log.go:172] (0xc000b354a0) Go away received\nI0626 00:55:19.577471 3061 log.go:172] (0xc000b354a0) (0xc000684c80) Stream removed, broadcasting: 1\nI0626 00:55:19.577486 3061 log.go:172] (0xc000b354a0) (0xc00063b860) Stream removed, broadcasting: 3\nI0626 00:55:19.577491 3061 log.go:172] (0xc000b354a0) (0xc0005560a0) Stream removed, broadcasting: 5\n" Jun 26 00:55:19.583: INFO: stdout: "" Jun 26 00:55:19.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3464 execpod-affinityps9t8 -- /bin/sh -x -c nc -zv -t -w 2 10.102.21.223 80' Jun 26 00:55:19.787: INFO: stderr: "I0626 00:55:19.713483 3083 log.go:172] (0xc000616fd0) (0xc000c243c0) Create stream\nI0626 00:55:19.713532 3083 log.go:172] (0xc000616fd0) (0xc000c243c0) Stream added, broadcasting: 1\nI0626 00:55:19.716598 3083 log.go:172] (0xc000616fd0) Reply frame received for 1\nI0626 00:55:19.716633 3083 log.go:172] (0xc000616fd0) (0xc0007741e0) Create stream\nI0626 00:55:19.716642 3083 log.go:172] (0xc000616fd0) (0xc0007741e0) Stream added, broadcasting: 3\nI0626 00:55:19.717552 3083 log.go:172] (0xc000616fd0) Reply frame received for 3\nI0626 00:55:19.717576 3083 log.go:172] (0xc000616fd0) (0xc0006fa280) Create stream\nI0626 00:55:19.717583 3083 log.go:172] (0xc000616fd0) (0xc0006fa280) Stream added, broadcasting: 5\nI0626 00:55:19.718170 3083 log.go:172] (0xc000616fd0) Reply frame received for 5\nI0626 00:55:19.779628 3083 log.go:172] (0xc000616fd0) Data frame received for 3\nI0626 00:55:19.779666 3083 log.go:172] (0xc0007741e0) (3) Data frame handling\nI0626 00:55:19.779691 3083 log.go:172] (0xc000616fd0) Data frame received for 5\nI0626 00:55:19.779712 3083 log.go:172] (0xc0006fa280) (5) Data frame handling\nI0626 00:55:19.779729 3083 log.go:172] (0xc0006fa280) (5) Data frame sent\nI0626 00:55:19.779744 3083 log.go:172] (0xc000616fd0) Data frame received for 5\nI0626 00:55:19.779754 3083 log.go:172] (0xc0006fa280) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.21.223 80\nConnection to 10.102.21.223 80 port [tcp/http] succeeded!\nI0626 00:55:19.781469 3083 log.go:172] (0xc000616fd0) Data frame received for 1\nI0626 00:55:19.781488 3083 log.go:172] (0xc000c243c0) (1) Data frame handling\nI0626 00:55:19.781500 3083 log.go:172] (0xc000c243c0) (1) Data frame sent\nI0626 00:55:19.781512 3083 log.go:172] (0xc000616fd0) (0xc000c243c0) Stream removed, broadcasting: 1\nI0626 00:55:19.781567 3083 log.go:172] (0xc000616fd0) Go away received\nI0626 
00:55:19.781813 3083 log.go:172] (0xc000616fd0) (0xc000c243c0) Stream removed, broadcasting: 1\nI0626 00:55:19.781827 3083 log.go:172] (0xc000616fd0) (0xc0007741e0) Stream removed, broadcasting: 3\nI0626 00:55:19.781839 3083 log.go:172] (0xc000616fd0) (0xc0006fa280) Stream removed, broadcasting: 5\n" Jun 26 00:55:19.787: INFO: stdout: "" Jun 26 00:55:19.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3464 execpod-affinityps9t8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.21.223:80/ ; done' Jun 26 00:55:20.119: INFO: stderr: "I0626 00:55:19.940758 3104 log.go:172] (0xc000afbad0) (0xc000823220) Create stream\nI0626 00:55:19.940821 3104 log.go:172] (0xc000afbad0) (0xc000823220) Stream added, broadcasting: 1\nI0626 00:55:19.943197 3104 log.go:172] (0xc000afbad0) Reply frame received for 1\nI0626 00:55:19.943226 3104 log.go:172] (0xc000afbad0) (0xc000806a00) Create stream\nI0626 00:55:19.943233 3104 log.go:172] (0xc000afbad0) (0xc000806a00) Stream added, broadcasting: 3\nI0626 00:55:19.944069 3104 log.go:172] (0xc000afbad0) Reply frame received for 3\nI0626 00:55:19.944099 3104 log.go:172] (0xc000afbad0) (0xc000823cc0) Create stream\nI0626 00:55:19.944110 3104 log.go:172] (0xc000afbad0) (0xc000823cc0) Stream added, broadcasting: 5\nI0626 00:55:19.944995 3104 log.go:172] (0xc000afbad0) Reply frame received for 5\nI0626 00:55:20.010488 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.010543 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.010595 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.010621 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.010656 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.010689 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.013738 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.013773 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.013801 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.014514 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.014541 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.014565 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ echo\nI0626 00:55:20.014661 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.014701 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.014721 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.014735 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.014743 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.014761 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.022993 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.023022 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.023056 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.023328 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.023350 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.023358 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.023375 3104 log.go:172] (0xc000afbad0) Data 
frame received for 5\nI0626 00:55:20.023380 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.023385 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.027274 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.027295 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.027314 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.027832 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.027849 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.027861 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.027868 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\nI0626 00:55:20.027873 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.027878 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.027887 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\nI0626 00:55:20.027892 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.027897 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.031903 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.031939 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.031974 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.032215 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.032239 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.032254 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\nI0626 00:55:20.032271 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.032292 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\n+ echo\n+ curl -q -sI0626 00:55:20.032308 3104 log.go:172] (0xc000afbad0) Data frame received for 3\n --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.032321 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.032359 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.032386 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\nI0626 00:55:20.036872 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.036908 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.036938 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.037613 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.037635 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.037645 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.037659 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.037667 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.037675 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\nI0626 00:55:20.037683 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.037693 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.037719 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\nI0626 00:55:20.042116 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.042148 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.042172 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.042857 3104 log.go:172] (0xc000afbad0) 
Data frame received for 5\nI0626 00:55:20.042894 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.042922 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.042952 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.042977 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.043007 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.050197 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.050225 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.050240 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.050765 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.050797 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.050811 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.050834 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.050844 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.050868 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.056756 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.056773 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.056797 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.057244 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.057298 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.057307 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.057412 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.057425 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.057445 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.065501 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.065526 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.065545 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.066289 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.066315 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.066340 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.066357 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.066383 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.066397 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.070317 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.070348 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.070378 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.070624 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.070653 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.070664 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.070677 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.070693 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.070702 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.102.21.223:80/\nI0626 00:55:20.075476 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.075489 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.075495 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.076181 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.076214 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.076235 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.076250 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.076259 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.076268 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\nI0626 00:55:20.076277 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.076285 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.076306 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\nI0626 00:55:20.080533 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.080560 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.080606 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.081877 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.081900 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.081911 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.081924 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.081930 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.081936 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.092234 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.092260 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.092278 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.092561 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.092585 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.092599 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.092624 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.092638 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.092651 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.097364 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.097389 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.097400 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.100984 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.100996 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.101009 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.101021 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.101027 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.101036 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\nI0626 00:55:20.105332 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.105361 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.105372 3104 log.go:172] 
(0xc000806a00) (3) Data frame sent\nI0626 00:55:20.106077 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.106145 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.106161 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.106176 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.106184 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.106189 3104 log.go:172] (0xc000823cc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.109640 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.109658 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.109681 3104 log.go:172] (0xc000806a00) (3) Data frame sent\nI0626 00:55:20.110133 3104 log.go:172] (0xc000afbad0) Data frame received for 5\nI0626 00:55:20.110155 3104 log.go:172] (0xc000823cc0) (5) Data frame handling\nI0626 00:55:20.110172 3104 log.go:172] (0xc000afbad0) Data frame received for 3\nI0626 00:55:20.110178 3104 log.go:172] (0xc000806a00) (3) Data frame handling\nI0626 00:55:20.111444 3104 log.go:172] (0xc000afbad0) Data frame received for 1\nI0626 00:55:20.111506 3104 log.go:172] (0xc000823220) (1) Data frame handling\nI0626 00:55:20.111557 3104 log.go:172] (0xc000823220) (1) Data frame sent\nI0626 00:55:20.111604 3104 log.go:172] (0xc000afbad0) (0xc000823220) Stream removed, broadcasting: 1\nI0626 00:55:20.111628 3104 log.go:172] (0xc000afbad0) Go away received\nI0626 00:55:20.112401 3104 log.go:172] (0xc000afbad0) (0xc000823220) Stream removed, broadcasting: 1\nI0626 00:55:20.112429 3104 log.go:172] (0xc000afbad0) (0xc000806a00) Stream removed, broadcasting: 3\nI0626 00:55:20.112449 3104 log.go:172] (0xc000afbad0) (0xc000823cc0) Stream removed, broadcasting: 5\n" Jun 26 00:55:20.120: INFO: stdout: "\naffinity-clusterip-transition-zbv5d\naffinity-clusterip-transition-wxm8c\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-wxm8c\naffinity-clusterip-transition-zbv5d\naffinity-clusterip-transition-zbv5d\naffinity-clusterip-transition-zbv5d\naffinity-clusterip-transition-wxm8c\naffinity-clusterip-transition-wxm8c\naffinity-clusterip-transition-zbv5d\naffinity-clusterip-transition-wxm8c\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-wxm8c\naffinity-clusterip-transition-4xzbr" Jun 26 00:55:20.120: INFO: Received response from host: Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-zbv5d Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-wxm8c Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-wxm8c Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-zbv5d Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-zbv5d Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-zbv5d Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-wxm8c Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-wxm8c Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-zbv5d Jun 26 
00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-wxm8c Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-wxm8c Jun 26 00:55:20.120: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3464 execpod-affinityps9t8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.21.223:80/ ; done' Jun 26 00:55:20.421: INFO: stderr: "I0626 00:55:20.266671 3123 log.go:172] (0xc0000e5080) (0xc0006b19a0) Create stream\nI0626 00:55:20.266798 3123 log.go:172] (0xc0000e5080) (0xc0006b19a0) Stream added, broadcasting: 1\nI0626 00:55:20.269701 3123 log.go:172] (0xc0000e5080) Reply frame received for 1\nI0626 00:55:20.269743 3123 log.go:172] (0xc0000e5080) (0xc0006968c0) Create stream\nI0626 00:55:20.269761 3123 log.go:172] (0xc0000e5080) (0xc0006968c0) Stream added, broadcasting: 3\nI0626 00:55:20.270687 3123 log.go:172] (0xc0000e5080) Reply frame received for 3\nI0626 00:55:20.270717 3123 log.go:172] (0xc0000e5080) (0xc000696dc0) Create stream\nI0626 00:55:20.270732 3123 log.go:172] (0xc0000e5080) (0xc000696dc0) Stream added, broadcasting: 5\nI0626 00:55:20.271559 3123 log.go:172] (0xc0000e5080) Reply frame received for 5\nI0626 00:55:20.328177 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.328216 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.328227 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.328244 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.328250 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.328255 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.335359 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.335462 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.335546 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.336341 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.336368 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.336384 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.336397 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.336404 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.336411 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.342461 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.342486 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.342508 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.343076 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.343088 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.343099 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.343114 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.343123 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.343136 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.347367 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.347383 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.347398 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.347844 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.347867 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.347885 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.347903 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.347915 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.347929 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.351995 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.352013 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.352041 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.352681 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.352696 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.352702 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\nI0626 00:55:20.352708 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.352714 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.352728 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.352755 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.352770 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.352779 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\nI0626 00:55:20.360008 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.360022 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.360031 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.360482 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.360500 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.360519 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.360643 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.360655 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.360665 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.363934 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.363964 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.363986 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.364143 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.364154 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.364161 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.364182 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.364205 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.364224 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.367652 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.367663 3123 
log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.367675 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.368008 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.368017 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.368023 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.368036 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.368062 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.368077 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.372224 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.372240 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.372262 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.372724 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.372743 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.372762 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.372844 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.372863 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.372883 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.377030 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.377059 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.377083 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.377602 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.377629 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.377657 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.377813 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.377826 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.377832 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.381658 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.381668 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.381673 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.382140 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.382152 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.382158 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.382178 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.382195 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.382213 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.388186 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.388202 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.388215 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.388638 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.388648 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.388654 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.388717 3123 
log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.388731 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.388743 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.391924 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.391935 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.391941 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.392274 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.392284 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.392289 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.392384 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.392395 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.392401 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.396556 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.396577 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.396591 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.396919 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.396947 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.396961 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\nI0626 00:55:20.396972 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.396981 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.397005 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\nI0626 00:55:20.397065 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.397081 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.397097 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.401336 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.401350 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.401357 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.401872 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.401894 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.401910 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.401985 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.402001 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.402016 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.405471 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.405491 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.405504 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.405783 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.405805 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.405816 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.405825 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.405830 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.405836 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\nI0626 00:55:20.405841 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 
00:55:20.405846 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.21.223:80/\nI0626 00:55:20.405861 3123 log.go:172] (0xc000696dc0) (5) Data frame sent\nI0626 00:55:20.410099 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.410112 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.410123 3123 log.go:172] (0xc0006968c0) (3) Data frame sent\nI0626 00:55:20.411214 3123 log.go:172] (0xc0000e5080) Data frame received for 5\nI0626 00:55:20.411238 3123 log.go:172] (0xc000696dc0) (5) Data frame handling\nI0626 00:55:20.411318 3123 log.go:172] (0xc0000e5080) Data frame received for 3\nI0626 00:55:20.411340 3123 log.go:172] (0xc0006968c0) (3) Data frame handling\nI0626 00:55:20.413074 3123 log.go:172] (0xc0000e5080) Data frame received for 1\nI0626 00:55:20.413096 3123 log.go:172] (0xc0006b19a0) (1) Data frame handling\nI0626 00:55:20.413239 3123 log.go:172] (0xc0006b19a0) (1) Data frame sent\nI0626 00:55:20.413261 3123 log.go:172] (0xc0000e5080) (0xc0006b19a0) Stream removed, broadcasting: 1\nI0626 00:55:20.413540 3123 log.go:172] (0xc0000e5080) Go away received\nI0626 00:55:20.413576 3123 log.go:172] (0xc0000e5080) (0xc0006b19a0) Stream removed, broadcasting: 1\nI0626 00:55:20.413601 3123 log.go:172] (0xc0000e5080) (0xc0006968c0) Stream removed, broadcasting: 3\nI0626 00:55:20.413608 3123 log.go:172] (0xc0000e5080) (0xc000696dc0) Stream removed, broadcasting: 5\n" Jun 26 00:55:20.422: INFO: stdout: "\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr\naffinity-clusterip-transition-4xzbr" Jun 26 00:55:20.422: INFO: Received response from host: Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr 
Jun 26 00:55:20.422: INFO: Received response from host: affinity-clusterip-transition-4xzbr Jun 26 00:55:20.422: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-3464, will wait for the garbage collector to delete the pods Jun 26 00:55:20.610: INFO: Deleting ReplicationController affinity-clusterip-transition took: 86.647898ms Jun 26 00:55:21.010: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.274465ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:55:34.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3464" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:26.833 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":236,"skipped":3731,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:55:34.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0626 00:55:45.056695 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 26 00:55:45.056: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:55:45.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1660" for this suite. • [SLOW TEST:10.081 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":294,"completed":237,"skipped":3741,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:55:45.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:55:45.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-814" for this suite. 
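The ServiceAccount lifecycle exercised above (create, watch, patch, list by label selector, delete) maps directly onto plain kubectl operations. A sketch with hypothetical names (demo-sa, purpose=demo):

kubectl create serviceaccount demo-sa
# in a second terminal, watching surfaces the ADDED/MODIFIED/DELETED events:
#   kubectl get serviceaccounts --watch
kubectl patch serviceaccount demo-sa -p '{"metadata":{"labels":{"purpose":"demo"}}}'
kubectl get serviceaccounts -l purpose=demo
kubectl delete serviceaccount demo-sa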
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":294,"completed":238,"skipped":3770,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:55:45.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Jun 26 00:55:45.379: INFO: Waiting up to 5m0s for pod "var-expansion-b3a461b5-d66f-468a-9df4-4f5e09b2c3f7" in namespace "var-expansion-6336" to be "Succeeded or Failed" Jun 26 00:55:45.438: INFO: Pod "var-expansion-b3a461b5-d66f-468a-9df4-4f5e09b2c3f7": Phase="Pending", Reason="", readiness=false. Elapsed: 59.094186ms Jun 26 00:55:47.441: INFO: Pod "var-expansion-b3a461b5-d66f-468a-9df4-4f5e09b2c3f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062387057s Jun 26 00:55:49.445: INFO: Pod "var-expansion-b3a461b5-d66f-468a-9df4-4f5e09b2c3f7": Phase="Running", Reason="", readiness=true. Elapsed: 4.066346946s Jun 26 00:55:51.449: INFO: Pod "var-expansion-b3a461b5-d66f-468a-9df4-4f5e09b2c3f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070276163s STEP: Saw pod success Jun 26 00:55:51.449: INFO: Pod "var-expansion-b3a461b5-d66f-468a-9df4-4f5e09b2c3f7" satisfied condition "Succeeded or Failed" Jun 26 00:55:51.452: INFO: Trying to get logs from node latest-worker2 pod var-expansion-b3a461b5-d66f-468a-9df4-4f5e09b2c3f7 container dapi-container: STEP: delete the pod Jun 26 00:55:51.507: INFO: Waiting for pod var-expansion-b3a461b5-d66f-468a-9df4-4f5e09b2c3f7 to disappear Jun 26 00:55:51.528: INFO: Pod var-expansion-b3a461b5-d66f-468a-9df4-4f5e09b2c3f7 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:55:51.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6336" for this suite. 
• [SLOW TEST:6.312 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":294,"completed":239,"skipped":3780,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:55:51.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jun 26 00:55:51.623: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
Jun 26 00:55:52.116: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jun 26 00:55:54.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729752, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729752, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729752, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729752, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:55:56.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729752, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729752, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729752, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728729752, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 00:55:59.020: INFO: Waited 625.895517ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:55:59.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3630" for this suite. 
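Registering the sample API server, as the step above does, amounts to creating an APIService object that tells the aggregation layer to proxy a group/version to a backing Service. A sketch that assumes a sample-apiserver Deployment and a Service named sample-api already exist in namespace default; the group and version follow the upstream wardle sample, and TLS verification is skipped only to keep the example short:

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  insecureSkipTLSVerify: true   # a production registration would set caBundle instead
  service:
    name: sample-api
    namespace: default
  groupPriorityMinimum: 2000
  versionPriority: 200
EOF
kubectl get apiservice v1alpha1.wardle.example.com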
• [SLOW TEST:8.443 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":294,"completed":240,"skipped":3823,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:56:00.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Jun 26 00:56:00.439: INFO: Waiting up to 5m0s for pod "pod-ac75eb2f-99a1-4c95-b76e-9c6db37b2c10" in namespace "emptydir-2386" to be "Succeeded or Failed" Jun 26 00:56:00.548: INFO: Pod "pod-ac75eb2f-99a1-4c95-b76e-9c6db37b2c10": Phase="Pending", Reason="", readiness=false. Elapsed: 109.363669ms Jun 26 00:56:02.572: INFO: Pod "pod-ac75eb2f-99a1-4c95-b76e-9c6db37b2c10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13311315s Jun 26 00:56:04.590: INFO: Pod "pod-ac75eb2f-99a1-4c95-b76e-9c6db37b2c10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151135825s STEP: Saw pod success Jun 26 00:56:04.590: INFO: Pod "pod-ac75eb2f-99a1-4c95-b76e-9c6db37b2c10" satisfied condition "Succeeded or Failed" Jun 26 00:56:04.594: INFO: Trying to get logs from node latest-worker2 pod pod-ac75eb2f-99a1-4c95-b76e-9c6db37b2c10 container test-container: STEP: delete the pod Jun 26 00:56:04.629: INFO: Waiting for pod pod-ac75eb2f-99a1-4c95-b76e-9c6db37b2c10 to disappear Jun 26 00:56:04.634: INFO: Pod pod-ac75eb2f-99a1-4c95-b76e-9c6db37b2c10 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:56:04.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2386" for this suite. 
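The check above asserts the mount characteristics of an emptyDir on the default medium (node disk rather than tmpfs); the conformance expectation is a world-accessible 0777 directory. A sketch that surfaces the same bits, with an illustrative pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # no medium set, so the node's backing storage is used
EOF
kubectl logs emptydir-mode-demo   # expected to print 777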
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":241,"skipped":3855,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:56:04.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jun 26 00:56:04.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4827' Jun 26 00:56:05.185: INFO: stderr: "" Jun 26 00:56:05.185: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 26 00:56:06.213: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:56:06.213: INFO: Found 0 / 1 Jun 26 00:56:07.189: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:56:07.190: INFO: Found 0 / 1 Jun 26 00:56:08.190: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:56:08.190: INFO: Found 0 / 1 Jun 26 00:56:09.189: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:56:09.189: INFO: Found 1 / 1 Jun 26 00:56:09.190: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 26 00:56:09.192: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:56:09.192: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 26 00:56:09.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-p66lg --namespace=kubectl-4827 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 26 00:56:09.301: INFO: stderr: "" Jun 26 00:56:09.302: INFO: stdout: "pod/agnhost-master-p66lg patched\n" STEP: checking annotations Jun 26 00:56:09.366: INFO: Selector matched 1 pods for map[app:agnhost] Jun 26 00:56:09.366: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:56:09.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4827" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":294,"completed":242,"skipped":3862,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:56:09.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9282 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 26 00:56:09.429: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 26 00:56:09.535: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 26 00:56:11.540: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 26 00:56:13.539: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:56:15.540: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:56:17.539: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:56:19.540: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:56:21.539: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:56:23.539: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:56:25.540: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:56:27.538: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:56:29.539: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 26 00:56:31.539: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 26 00:56:31.546: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 26 00:56:35.570: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.230:8080/dial?request=hostname&protocol=http&host=10.244.1.229&port=8080&tries=1'] Namespace:pod-network-test-9282 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:56:35.570: INFO: >>> kubeConfig: /root/.kube/config I0626 00:56:35.607376 8 log.go:172] (0xc00372a630) (0xc001c82c80) Create stream I0626 00:56:35.607408 8 log.go:172] (0xc00372a630) (0xc001c82c80) Stream added, broadcasting: 1 I0626 00:56:35.610235 8 log.go:172] (0xc00372a630) Reply frame received for 1 I0626 00:56:35.610303 8 log.go:172] (0xc00372a630) (0xc00227ad20) Create stream I0626 00:56:35.610334 8 log.go:172] (0xc00372a630) (0xc00227ad20) Stream added, broadcasting: 3 I0626 00:56:35.611516 8 log.go:172] (0xc00372a630) Reply frame received for 3 I0626 
00:56:35.611590 8 log.go:172] (0xc00372a630) (0xc00227adc0) Create stream I0626 00:56:35.611611 8 log.go:172] (0xc00372a630) (0xc00227adc0) Stream added, broadcasting: 5 I0626 00:56:35.612879 8 log.go:172] (0xc00372a630) Reply frame received for 5 I0626 00:56:35.679728 8 log.go:172] (0xc00372a630) Data frame received for 3 I0626 00:56:35.679767 8 log.go:172] (0xc00227ad20) (3) Data frame handling I0626 00:56:35.679800 8 log.go:172] (0xc00227ad20) (3) Data frame sent I0626 00:56:35.680300 8 log.go:172] (0xc00372a630) Data frame received for 5 I0626 00:56:35.680336 8 log.go:172] (0xc00227adc0) (5) Data frame handling I0626 00:56:35.680367 8 log.go:172] (0xc00372a630) Data frame received for 3 I0626 00:56:35.680392 8 log.go:172] (0xc00227ad20) (3) Data frame handling I0626 00:56:35.682634 8 log.go:172] (0xc00372a630) Data frame received for 1 I0626 00:56:35.682678 8 log.go:172] (0xc001c82c80) (1) Data frame handling I0626 00:56:35.682716 8 log.go:172] (0xc001c82c80) (1) Data frame sent I0626 00:56:35.682737 8 log.go:172] (0xc00372a630) (0xc001c82c80) Stream removed, broadcasting: 1 I0626 00:56:35.682756 8 log.go:172] (0xc00372a630) Go away received I0626 00:56:35.682914 8 log.go:172] (0xc00372a630) (0xc001c82c80) Stream removed, broadcasting: 1 I0626 00:56:35.682947 8 log.go:172] (0xc00372a630) (0xc00227ad20) Stream removed, broadcasting: 3 I0626 00:56:35.682971 8 log.go:172] (0xc00372a630) (0xc00227adc0) Stream removed, broadcasting: 5 Jun 26 00:56:35.683: INFO: Waiting for responses: map[] Jun 26 00:56:35.710: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.230:8080/dial?request=hostname&protocol=http&host=10.244.2.40&port=8080&tries=1'] Namespace:pod-network-test-9282 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:56:35.710: INFO: >>> kubeConfig: /root/.kube/config I0626 00:56:35.746680 8 log.go:172] (0xc00372abb0) (0xc001c83040) Create stream I0626 00:56:35.746720 8 log.go:172] (0xc00372abb0) (0xc001c83040) Stream added, broadcasting: 1 I0626 00:56:35.748768 8 log.go:172] (0xc00372abb0) Reply frame received for 1 I0626 00:56:35.748810 8 log.go:172] (0xc00372abb0) (0xc001c83220) Create stream I0626 00:56:35.748825 8 log.go:172] (0xc00372abb0) (0xc001c83220) Stream added, broadcasting: 3 I0626 00:56:35.750046 8 log.go:172] (0xc00372abb0) Reply frame received for 3 I0626 00:56:35.750089 8 log.go:172] (0xc00372abb0) (0xc00227af00) Create stream I0626 00:56:35.750102 8 log.go:172] (0xc00372abb0) (0xc00227af00) Stream added, broadcasting: 5 I0626 00:56:35.751071 8 log.go:172] (0xc00372abb0) Reply frame received for 5 I0626 00:56:35.827262 8 log.go:172] (0xc00372abb0) Data frame received for 3 I0626 00:56:35.827300 8 log.go:172] (0xc001c83220) (3) Data frame handling I0626 00:56:35.827331 8 log.go:172] (0xc001c83220) (3) Data frame sent I0626 00:56:35.828150 8 log.go:172] (0xc00372abb0) Data frame received for 3 I0626 00:56:35.828184 8 log.go:172] (0xc001c83220) (3) Data frame handling I0626 00:56:35.828205 8 log.go:172] (0xc00372abb0) Data frame received for 5 I0626 00:56:35.828212 8 log.go:172] (0xc00227af00) (5) Data frame handling I0626 00:56:35.830152 8 log.go:172] (0xc00372abb0) Data frame received for 1 I0626 00:56:35.830181 8 log.go:172] (0xc001c83040) (1) Data frame handling I0626 00:56:35.830203 8 log.go:172] (0xc001c83040) (1) Data frame sent I0626 00:56:35.830225 8 log.go:172] (0xc00372abb0) (0xc001c83040) Stream removed, broadcasting: 1 I0626 00:56:35.830252 8 log.go:172] 
(0xc00372abb0) Go away received I0626 00:56:35.830331 8 log.go:172] (0xc00372abb0) (0xc001c83040) Stream removed, broadcasting: 1 I0626 00:56:35.830367 8 log.go:172] (0xc00372abb0) (0xc001c83220) Stream removed, broadcasting: 3 I0626 00:56:35.830381 8 log.go:172] (0xc00372abb0) (0xc00227af00) Stream removed, broadcasting: 5 Jun 26 00:56:35.830: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:56:35.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9282" for this suite. • [SLOW TEST:26.465 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":294,"completed":243,"skipped":3878,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:56:35.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 00:56:35.901: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 26 00:56:40.914: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 26 00:56:40.914: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 26 00:56:40.959: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9997 /apis/apps/v1/namespaces/deployment-9997/deployments/test-cleanup-deployment 4bcd2a7a-dfbb-459e-b1c8-656ffe125a44 15928853 1 2020-06-26 00:56:40 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-06-26 00:56:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ff0be8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jun 26 00:56:41.067: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-9997 /apis/apps/v1/namespaces/deployment-9997/replicasets/test-cleanup-deployment-6688745694 93a24429-092d-4f20-b720-1bef040faecf 15928862 1 2020-06-26 00:56:40 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 4bcd2a7a-dfbb-459e-b1c8-656ffe125a44 0xc004ff1147 0xc004ff1148}] [] [{kube-controller-manager Update apps/v1 2020-06-26 00:56:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bcd2a7a-dfbb-459e-b1c8-656ffe125a44\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004ff1208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 00:56:41.067: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 26 00:56:41.067: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9997 /apis/apps/v1/namespaces/deployment-9997/replicasets/test-cleanup-controller cfae1545-b715-4690-8608-5de6b96ad60e 15928854 1 2020-06-26 00:56:35 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 4bcd2a7a-dfbb-459e-b1c8-656ffe125a44 0xc004ff100f 0xc004ff1020}] [] [{e2e.test Update apps/v1 2020-06-26 00:56:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-26 00:56:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"4bcd2a7a-dfbb-459e-b1c8-656ffe125a44\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004ff10c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 26 00:56:41.099: INFO: Pod "test-cleanup-controller-ps424" is available: &Pod{ObjectMeta:{test-cleanup-controller-ps424 test-cleanup-controller- deployment-9997 /api/v1/namespaces/deployment-9997/pods/test-cleanup-controller-ps424 d0fde1a8-f202-4bee-b0a4-876938ffb675 15928845 0 2020-06-26 00:56:35 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller cfae1545-b715-4690-8608-5de6b96ad60e 0xc001fdd237 0xc001fdd238}] [] [{kube-controller-manager Update v1 2020-06-26 00:56:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cfae1545-b715-4690-8608-5de6b96ad60e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:56:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvz4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvz4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvz4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:56:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:56:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:56:38 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:56:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.41,StartTime:2020-06-26 00:56:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 00:56:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://52c3299e679265f0cd261ec0d89ca40932f71645f216660a6fba4893a5319bb9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 26 00:56:41.099: INFO: Pod "test-cleanup-deployment-6688745694-58nkl" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-58nkl test-cleanup-deployment-6688745694- deployment-9997 /api/v1/namespaces/deployment-9997/pods/test-cleanup-deployment-6688745694-58nkl 6f1f89e7-7351-424a-ac60-9a9f9ce48509 15928867 0 2020-06-26 00:56:40 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 93a24429-092d-4f20-b720-1bef040faecf 0xc001fdd437 0xc001fdd438}] [] [{kube-controller-manager Update v1 2020-06-26 00:56:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93a24429-092d-4f20-b720-1bef040faecf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 00:56:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvz4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvz4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvz4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:56:41 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [agnhost],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:56:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [agnhost],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 00:56:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-26 00:56:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:56:41.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9997" for this suite. • [SLOW TEST:5.269 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":294,"completed":244,"skipped":3881,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:56:41.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-6bb18f45-8ce5-4821-a0e5-8e404e5f4273 STEP: updating the pod Jun 26 00:56:51.830: INFO: Successfully updated pod "var-expansion-6bb18f45-8ce5-4821-a0e5-8e404e5f4273" STEP: waiting for pod and container restart STEP: Failing liveness probe Jun 26 00:56:51.855: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-2094 PodName:var-expansion-6bb18f45-8ce5-4821-a0e5-8e404e5f4273 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:56:51.855: INFO: >>> kubeConfig: 
/root/.kube/config I0626 00:56:51.909890 8 log.go:172] (0xc00369eb00) (0xc002c588c0) Create stream I0626 00:56:51.909944 8 log.go:172] (0xc00369eb00) (0xc002c588c0) Stream added, broadcasting: 1 I0626 00:56:51.912182 8 log.go:172] (0xc00369eb00) Reply frame received for 1 I0626 00:56:51.912320 8 log.go:172] (0xc00369eb00) (0xc002c58b40) Create stream I0626 00:56:51.912417 8 log.go:172] (0xc00369eb00) (0xc002c58b40) Stream added, broadcasting: 3 I0626 00:56:51.913720 8 log.go:172] (0xc00369eb00) Reply frame received for 3 I0626 00:56:51.913769 8 log.go:172] (0xc00369eb00) (0xc0011701e0) Create stream I0626 00:56:51.913786 8 log.go:172] (0xc00369eb00) (0xc0011701e0) Stream added, broadcasting: 5 I0626 00:56:51.914897 8 log.go:172] (0xc00369eb00) Reply frame received for 5 I0626 00:56:52.008863 8 log.go:172] (0xc00369eb00) Data frame received for 3 I0626 00:56:52.008901 8 log.go:172] (0xc002c58b40) (3) Data frame handling I0626 00:56:52.009402 8 log.go:172] (0xc00369eb00) Data frame received for 5 I0626 00:56:52.009435 8 log.go:172] (0xc0011701e0) (5) Data frame handling I0626 00:56:52.011005 8 log.go:172] (0xc00369eb00) Data frame received for 1 I0626 00:56:52.011022 8 log.go:172] (0xc002c588c0) (1) Data frame handling I0626 00:56:52.011045 8 log.go:172] (0xc002c588c0) (1) Data frame sent I0626 00:56:52.011059 8 log.go:172] (0xc00369eb00) (0xc002c588c0) Stream removed, broadcasting: 1 I0626 00:56:52.011126 8 log.go:172] (0xc00369eb00) (0xc002c588c0) Stream removed, broadcasting: 1 I0626 00:56:52.011138 8 log.go:172] (0xc00369eb00) (0xc002c58b40) Stream removed, broadcasting: 3 I0626 00:56:52.011162 8 log.go:172] (0xc00369eb00) (0xc0011701e0) Stream removed, broadcasting: 5 Jun 26 00:56:52.011: INFO: Pod exec output: / STEP: Waiting for container to restart I0626 00:56:52.011293 8 log.go:172] (0xc00369eb00) Go away received Jun 26 00:56:52.014: INFO: Container dapi-container, restarts: 0 Jun 26 00:57:02.019: INFO: Container dapi-container, restarts: 0 Jun 26 00:57:12.018: INFO: Container dapi-container, restarts: 0 Jun 26 00:57:22.018: INFO: Container dapi-container, restarts: 0 Jun 26 00:57:32.019: INFO: Container dapi-container, restarts: 1 Jun 26 00:57:32.019: INFO: Container has restart count: 1 STEP: Rewriting the file Jun 26 00:57:32.019: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-2094 PodName:var-expansion-6bb18f45-8ce5-4821-a0e5-8e404e5f4273 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:57:32.019: INFO: >>> kubeConfig: /root/.kube/config I0626 00:57:32.055433 8 log.go:172] (0xc00293f340) (0xc0012e7400) Create stream I0626 00:57:32.055465 8 log.go:172] (0xc00293f340) (0xc0012e7400) Stream added, broadcasting: 1 I0626 00:57:32.057264 8 log.go:172] (0xc00293f340) Reply frame received for 1 I0626 00:57:32.057317 8 log.go:172] (0xc00293f340) (0xc0027b0000) Create stream I0626 00:57:32.057332 8 log.go:172] (0xc00293f340) (0xc0027b0000) Stream added, broadcasting: 3 I0626 00:57:32.058303 8 log.go:172] (0xc00293f340) Reply frame received for 3 I0626 00:57:32.058344 8 log.go:172] (0xc00293f340) (0xc0012e74a0) Create stream I0626 00:57:32.058356 8 log.go:172] (0xc00293f340) (0xc0012e74a0) Stream added, broadcasting: 5 I0626 00:57:32.059129 8 log.go:172] (0xc00293f340) Reply frame received for 5 I0626 00:57:32.152389 8 log.go:172] (0xc00293f340) Data frame received for 5 I0626 00:57:32.152419 8 log.go:172] (0xc0012e74a0) (5) Data frame handling I0626 
00:57:32.152450 8 log.go:172] (0xc00293f340) Data frame received for 3 I0626 00:57:32.152470 8 log.go:172] (0xc0027b0000) (3) Data frame handling I0626 00:57:32.153650 8 log.go:172] (0xc00293f340) Data frame received for 1 I0626 00:57:32.153689 8 log.go:172] (0xc0012e7400) (1) Data frame handling I0626 00:57:32.153710 8 log.go:172] (0xc0012e7400) (1) Data frame sent I0626 00:57:32.153724 8 log.go:172] (0xc00293f340) (0xc0012e7400) Stream removed, broadcasting: 1 I0626 00:57:32.153733 8 log.go:172] (0xc00293f340) Go away received I0626 00:57:32.153857 8 log.go:172] (0xc00293f340) (0xc0012e7400) Stream removed, broadcasting: 1 I0626 00:57:32.153873 8 log.go:172] (0xc00293f340) (0xc0027b0000) Stream removed, broadcasting: 3 I0626 00:57:32.153885 8 log.go:172] (0xc00293f340) (0xc0012e74a0) Stream removed, broadcasting: 5 Jun 26 00:57:32.153: INFO: Exec stderr: "" Jun 26 00:57:32.153: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jun 26 00:58:00.161: INFO: Container has restart count: 2 Jun 26 00:59:02.160: INFO: Container restart has stabilized STEP: test for subpath mounted with old value Jun 26 00:59:02.163: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-2094 PodName:var-expansion-6bb18f45-8ce5-4821-a0e5-8e404e5f4273 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:59:02.164: INFO: >>> kubeConfig: /root/.kube/config I0626 00:59:02.195999 8 log.go:172] (0xc001bec2c0) (0xc002adb360) Create stream I0626 00:59:02.196049 8 log.go:172] (0xc001bec2c0) (0xc002adb360) Stream added, broadcasting: 1 I0626 00:59:02.198207 8 log.go:172] (0xc001bec2c0) Reply frame received for 1 I0626 00:59:02.198243 8 log.go:172] (0xc001bec2c0) (0xc0018bef00) Create stream I0626 00:59:02.198255 8 log.go:172] (0xc001bec2c0) (0xc0018bef00) Stream added, broadcasting: 3 I0626 00:59:02.199364 8 log.go:172] (0xc001bec2c0) Reply frame received for 3 I0626 00:59:02.199416 8 log.go:172] (0xc001bec2c0) (0xc002adb4a0) Create stream I0626 00:59:02.199437 8 log.go:172] (0xc001bec2c0) (0xc002adb4a0) Stream added, broadcasting: 5 I0626 00:59:02.200286 8 log.go:172] (0xc001bec2c0) Reply frame received for 5 I0626 00:59:02.273034 8 log.go:172] (0xc001bec2c0) Data frame received for 3 I0626 00:59:02.273069 8 log.go:172] (0xc0018bef00) (3) Data frame handling I0626 00:59:02.273088 8 log.go:172] (0xc001bec2c0) Data frame received for 5 I0626 00:59:02.273098 8 log.go:172] (0xc002adb4a0) (5) Data frame handling I0626 00:59:02.274716 8 log.go:172] (0xc001bec2c0) Data frame received for 1 I0626 00:59:02.274802 8 log.go:172] (0xc002adb360) (1) Data frame handling I0626 00:59:02.274852 8 log.go:172] (0xc002adb360) (1) Data frame sent I0626 00:59:02.274878 8 log.go:172] (0xc001bec2c0) (0xc002adb360) Stream removed, broadcasting: 1 I0626 00:59:02.274909 8 log.go:172] (0xc001bec2c0) Go away received I0626 00:59:02.275329 8 log.go:172] (0xc001bec2c0) (0xc002adb360) Stream removed, broadcasting: 1 I0626 00:59:02.275365 8 log.go:172] (0xc001bec2c0) (0xc0018bef00) Stream removed, broadcasting: 3 I0626 00:59:02.275389 8 log.go:172] (0xc001bec2c0) (0xc002adb4a0) Stream removed, broadcasting: 5 Jun 26 00:59:02.279: INFO: ExecWithOptions {Command:[/bin/sh -c test ! 
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-2094 PodName:var-expansion-6bb18f45-8ce5-4821-a0e5-8e404e5f4273 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 00:59:02.279: INFO: >>> kubeConfig: /root/.kube/config I0626 00:59:02.304964 8 log.go:172] (0xc004e320b0) (0xc0028bc820) Create stream I0626 00:59:02.304997 8 log.go:172] (0xc004e320b0) (0xc0028bc820) Stream added, broadcasting: 1 I0626 00:59:02.306843 8 log.go:172] (0xc004e320b0) Reply frame received for 1 I0626 00:59:02.306903 8 log.go:172] (0xc004e320b0) (0xc002b80dc0) Create stream I0626 00:59:02.306925 8 log.go:172] (0xc004e320b0) (0xc002b80dc0) Stream added, broadcasting: 3 I0626 00:59:02.307955 8 log.go:172] (0xc004e320b0) Reply frame received for 3 I0626 00:59:02.307997 8 log.go:172] (0xc004e320b0) (0xc0028bcaa0) Create stream I0626 00:59:02.308010 8 log.go:172] (0xc004e320b0) (0xc0028bcaa0) Stream added, broadcasting: 5 I0626 00:59:02.308935 8 log.go:172] (0xc004e320b0) Reply frame received for 5 I0626 00:59:02.363745 8 log.go:172] (0xc004e320b0) Data frame received for 3 I0626 00:59:02.363783 8 log.go:172] (0xc002b80dc0) (3) Data frame handling I0626 00:59:02.363802 8 log.go:172] (0xc004e320b0) Data frame received for 5 I0626 00:59:02.363826 8 log.go:172] (0xc0028bcaa0) (5) Data frame handling I0626 00:59:02.365426 8 log.go:172] (0xc004e320b0) Data frame received for 1 I0626 00:59:02.365458 8 log.go:172] (0xc0028bc820) (1) Data frame handling I0626 00:59:02.365486 8 log.go:172] (0xc0028bc820) (1) Data frame sent I0626 00:59:02.365504 8 log.go:172] (0xc004e320b0) (0xc0028bc820) Stream removed, broadcasting: 1 I0626 00:59:02.365523 8 log.go:172] (0xc004e320b0) Go away received I0626 00:59:02.365614 8 log.go:172] (0xc004e320b0) (0xc0028bc820) Stream removed, broadcasting: 1 I0626 00:59:02.365629 8 log.go:172] (0xc004e320b0) (0xc002b80dc0) Stream removed, broadcasting: 3 I0626 00:59:02.365635 8 log.go:172] (0xc004e320b0) (0xc0028bcaa0) Stream removed, broadcasting: 5 Jun 26 00:59:02.365: INFO: Deleting pod "var-expansion-6bb18f45-8ce5-4821-a0e5-8e404e5f4273" in namespace "var-expansion-2094" Jun 26 00:59:02.371: INFO: Wait up to 5m0s for pod "var-expansion-6bb18f45-8ce5-4821-a0e5-8e404e5f4273" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:59:36.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2094" for this suite. 
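
The variable-expansion spec above mounts a volume through a subPathExpr, then changes the referenced environment variable and forces a container restart (by failing the liveness probe) to prove the originally-expanded subpath survives the restart. A simplified sketch of the SubPathExpr mechanism being exercised (field layout condensed; names such as SUBPATH and workdir are illustrative, not the test's exact pod):

package main

import corev1 "k8s.io/api/core/v1"

// subPathExprContainer mounts its volume under a subpath expanded from an
// environment variable. Expansion happens when the container starts, and the
// test above verifies that a restart keeps the originally-expanded path even
// after the variable's value is changed on the live pod.
func subPathExprContainer() corev1.Container {
	return corev1.Container{
		Name:  "dapi-container", // container name from the log
		Image: "busybox",        // illustrative
		Env: []corev1.EnvVar{{
			Name:  "SUBPATH",
			Value: "foo",
		}},
		VolumeMounts: []corev1.VolumeMount{{
			Name:        "workdir",
			MountPath:   "/subpath_mount",
			SubPathExpr: "$(SUBPATH)",
		}},
	}
}
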
• [SLOW TEST:175.314 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":294,"completed":245,"skipped":3885,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:59:36.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:59:36.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-67" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":294,"completed":246,"skipped":3906,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:59:36.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 26 00:59:36.648: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 26 00:59:36.668: INFO: Waiting for terminating namespaces to be deleted... 
Jun 26 00:59:36.671: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jun 26 00:59:36.676: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jun 26 00:59:36.676: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jun 26 00:59:36.676: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jun 26 00:59:36.676: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jun 26 00:59:36.676: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 26 00:59:36.676: INFO: Container kindnet-cni ready: true, restart count 5 Jun 26 00:59:36.676: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jun 26 00:59:36.676: INFO: Container kube-proxy ready: true, restart count 0 Jun 26 00:59:36.676: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jun 26 00:59:36.681: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jun 26 00:59:36.681: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jun 26 00:59:36.681: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jun 26 00:59:36.681: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jun 26 00:59:36.681: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 26 00:59:36.681: INFO: Container kindnet-cni ready: true, restart count 5 Jun 26 00:59:36.681: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jun 26 00:59:36.681: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.161bf22ef6eebe8f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.161bf22ef89f41b6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:59:37.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3764" for this suite. 
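
The two FailedScheduling events above are exactly what an unmatched nodeSelector produces: the pod stays Pending and the scheduler keeps reporting that no node matched. A minimal sketch of such a pod (restricted-pod is the name from the events; the label and image are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restrictedPod carries a nodeSelector that no node in the cluster satisfies,
// so it is never scheduled and FailedScheduling events accumulate instead.
func restrictedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"unmatchable-label": "value"}, // illustrative
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
			}},
		},
	}
}
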
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":294,"completed":247,"skipped":3929,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:59:37.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-a7a45246-476b-4e7a-9995-efc35caca766 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-a7a45246-476b-4e7a-9995-efc35caca766 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:59:45.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7119" for this suite. 
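
The configmap spec above relies on the kubelet's periodic sync to push updated ConfigMap data into an already-mounted volume, which is why it simply updates the object and then polls the mounted file's content. A client-go sketch of the update half (the key and value are illustrative assumptions):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bumpConfigMap rewrites one key; pods mounting the ConfigMap as a volume see
// the new content after the kubelet's next sync, without a restart.
func bumpConfigMap(cs kubernetes.Interface, ns, name string) error {
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	cm.Data["data-1"] = "value-2" // illustrative key/value
	_, err = cs.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}
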
• [SLOW TEST:8.182 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":248,"skipped":3932,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:59:45.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Jun 26 00:59:45.988: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Jun 26 00:59:45.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8741' Jun 26 00:59:46.382: INFO: stderr: "" Jun 26 00:59:46.382: INFO: stdout: "service/agnhost-slave created\n" Jun 26 00:59:46.383: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Jun 26 00:59:46.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8741' Jun 26 00:59:46.680: INFO: stderr: "" Jun 26 00:59:46.680: INFO: stdout: "service/agnhost-master created\n" Jun 26 00:59:46.680: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 26 00:59:46.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8741' Jun 26 00:59:46.992: INFO: stderr: "" Jun 26 00:59:46.992: INFO: stdout: "service/frontend created\n" Jun 26 00:59:46.992: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jun 26 00:59:46.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8741' Jun 26 00:59:47.273: INFO: stderr: "" Jun 26 00:59:47.273: INFO: stdout: "deployment.apps/frontend created\n" Jun 26 00:59:47.273: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 26 00:59:47.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8741' Jun 26 00:59:47.594: INFO: stderr: "" Jun 26 00:59:47.594: INFO: stdout: "deployment.apps/agnhost-master created\n" Jun 26 00:59:47.595: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 26 00:59:47.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8741' Jun 26 00:59:47.860: INFO: stderr: "" Jun 26 00:59:47.860: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jun 26 00:59:47.860: INFO: Waiting for all frontend pods to be Running. Jun 26 00:59:57.910: INFO: Waiting for frontend to serve content. Jun 26 00:59:57.923: INFO: Trying to add a new entry to the guestbook. Jun 26 00:59:57.938: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 26 00:59:57.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8741' Jun 26 00:59:58.150: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 26 00:59:58.150: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jun 26 00:59:58.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8741' Jun 26 00:59:58.304: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 00:59:58.304: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 26 00:59:58.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8741' Jun 26 00:59:58.496: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 00:59:58.496: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 26 00:59:58.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8741' Jun 26 00:59:58.605: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 00:59:58.605: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 26 00:59:58.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8741' Jun 26 00:59:58.810: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 00:59:58.810: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jun 26 00:59:58.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8741' Jun 26 00:59:59.202: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 26 00:59:59.202: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 00:59:59.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8741" for this suite. 
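For readability, here is the frontend Service manifest created above, re-indented; the content is exactly as logged (the type: LoadBalancer line is commented out in the source manifest):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend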
• [SLOW TEST:13.473 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":294,"completed":249,"skipped":3941,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 00:59:59.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-4549 STEP: creating replication controller nodeport-test in namespace services-4549 I0626 01:00:00.208458 8 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4549, replica count: 2 I0626 01:00:03.258884 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 01:00:06.259170 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 26 01:00:06.259: INFO: Creating new exec pod Jun 26 01:00:11.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4549 execpodg7rhd -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jun 26 01:00:11.572: INFO: stderr: "I0626 01:00:11.407238 3420 log.go:172] (0xc000a413f0) (0xc000b36140) Create stream\nI0626 01:00:11.407288 3420 log.go:172] (0xc000a413f0) (0xc000b36140) Stream added, broadcasting: 1\nI0626 01:00:11.411546 3420 log.go:172] (0xc000a413f0) Reply frame received for 1\nI0626 01:00:11.411582 3420 log.go:172] (0xc000a413f0) (0xc000877c20) Create stream\nI0626 01:00:11.411592 3420 log.go:172] (0xc000a413f0) (0xc000877c20) Stream added, broadcasting: 3\nI0626 01:00:11.412473 3420 log.go:172] (0xc000a413f0) Reply frame received for 3\nI0626 01:00:11.412505 3420 log.go:172] (0xc000a413f0) (0xc0006df180) Create stream\nI0626 01:00:11.412516 3420 log.go:172] (0xc000a413f0) (0xc0006df180) Stream added, broadcasting: 5\nI0626 01:00:11.413541 3420 log.go:172] (0xc000a413f0) Reply frame received for 5\nI0626 01:00:11.552974 3420 
log.go:172] (0xc000a413f0) Data frame received for 5\nI0626 01:00:11.553008 3420 log.go:172] (0xc0006df180) (5) Data frame handling\nI0626 01:00:11.553023 3420 log.go:172] (0xc0006df180) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0626 01:00:11.564864 3420 log.go:172] (0xc000a413f0) Data frame received for 5\nI0626 01:00:11.564898 3420 log.go:172] (0xc0006df180) (5) Data frame handling\nI0626 01:00:11.564956 3420 log.go:172] (0xc0006df180) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0626 01:00:11.565462 3420 log.go:172] (0xc000a413f0) Data frame received for 3\nI0626 01:00:11.565488 3420 log.go:172] (0xc000877c20) (3) Data frame handling\nI0626 01:00:11.565889 3420 log.go:172] (0xc000a413f0) Data frame received for 5\nI0626 01:00:11.565918 3420 log.go:172] (0xc0006df180) (5) Data frame handling\nI0626 01:00:11.567693 3420 log.go:172] (0xc000a413f0) Data frame received for 1\nI0626 01:00:11.567711 3420 log.go:172] (0xc000b36140) (1) Data frame handling\nI0626 01:00:11.567723 3420 log.go:172] (0xc000b36140) (1) Data frame sent\nI0626 01:00:11.567739 3420 log.go:172] (0xc000a413f0) (0xc000b36140) Stream removed, broadcasting: 1\nI0626 01:00:11.567958 3420 log.go:172] (0xc000a413f0) Go away received\nI0626 01:00:11.568028 3420 log.go:172] (0xc000a413f0) (0xc000b36140) Stream removed, broadcasting: 1\nI0626 01:00:11.568082 3420 log.go:172] (0xc000a413f0) (0xc000877c20) Stream removed, broadcasting: 3\nI0626 01:00:11.568099 3420 log.go:172] (0xc000a413f0) (0xc0006df180) Stream removed, broadcasting: 5\n" Jun 26 01:00:11.572: INFO: stdout: "" Jun 26 01:00:11.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4549 execpodg7rhd -- /bin/sh -x -c nc -zv -t -w 2 10.99.96.119 80' Jun 26 01:00:11.767: INFO: stderr: "I0626 01:00:11.697735 3440 log.go:172] (0xc0009e4000) (0xc00071b5e0) Create stream\nI0626 01:00:11.697791 3440 log.go:172] (0xc0009e4000) (0xc00071b5e0) Stream added, broadcasting: 1\nI0626 01:00:11.699373 3440 log.go:172] (0xc0009e4000) Reply frame received for 1\nI0626 01:00:11.699412 3440 log.go:172] (0xc0009e4000) (0xc0000ddf40) Create stream\nI0626 01:00:11.699426 3440 log.go:172] (0xc0009e4000) (0xc0000ddf40) Stream added, broadcasting: 3\nI0626 01:00:11.700398 3440 log.go:172] (0xc0009e4000) Reply frame received for 3\nI0626 01:00:11.700440 3440 log.go:172] (0xc0009e4000) (0xc0001399a0) Create stream\nI0626 01:00:11.700454 3440 log.go:172] (0xc0009e4000) (0xc0001399a0) Stream added, broadcasting: 5\nI0626 01:00:11.701717 3440 log.go:172] (0xc0009e4000) Reply frame received for 5\nI0626 01:00:11.758911 3440 log.go:172] (0xc0009e4000) Data frame received for 5\nI0626 01:00:11.758964 3440 log.go:172] (0xc0001399a0) (5) Data frame handling\nI0626 01:00:11.758975 3440 log.go:172] (0xc0001399a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.99.96.119 80\nConnection to 10.99.96.119 80 port [tcp/http] succeeded!\nI0626 01:00:11.758989 3440 log.go:172] (0xc0009e4000) Data frame received for 3\nI0626 01:00:11.758996 3440 log.go:172] (0xc0000ddf40) (3) Data frame handling\nI0626 01:00:11.759204 3440 log.go:172] (0xc0009e4000) Data frame received for 5\nI0626 01:00:11.759242 3440 log.go:172] (0xc0001399a0) (5) Data frame handling\nI0626 01:00:11.760784 3440 log.go:172] (0xc0009e4000) Data frame received for 1\nI0626 01:00:11.760804 3440 log.go:172] (0xc00071b5e0) (1) Data frame handling\nI0626 01:00:11.760814 3440 log.go:172] (0xc00071b5e0) (1) Data frame sent\nI0626 
01:00:11.760962 3440 log.go:172] (0xc0009e4000) (0xc00071b5e0) Stream removed, broadcasting: 1\nI0626 01:00:11.761060 3440 log.go:172] (0xc0009e4000) Go away received\nI0626 01:00:11.761708 3440 log.go:172] (0xc0009e4000) (0xc00071b5e0) Stream removed, broadcasting: 1\nI0626 01:00:11.761745 3440 log.go:172] (0xc0009e4000) (0xc0000ddf40) Stream removed, broadcasting: 3\nI0626 01:00:11.761763 3440 log.go:172] (0xc0009e4000) (0xc0001399a0) Stream removed, broadcasting: 5\n" Jun 26 01:00:11.767: INFO: stdout: "" Jun 26 01:00:11.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4549 execpodg7rhd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32479' Jun 26 01:00:11.959: INFO: stderr: "I0626 01:00:11.897038 3461 log.go:172] (0xc00072cd10) (0xc000ada6e0) Create stream\nI0626 01:00:11.897089 3461 log.go:172] (0xc00072cd10) (0xc000ada6e0) Stream added, broadcasting: 1\nI0626 01:00:11.901447 3461 log.go:172] (0xc00072cd10) Reply frame received for 1\nI0626 01:00:11.901494 3461 log.go:172] (0xc00072cd10) (0xc00065ad20) Create stream\nI0626 01:00:11.901504 3461 log.go:172] (0xc00072cd10) (0xc00065ad20) Stream added, broadcasting: 3\nI0626 01:00:11.902385 3461 log.go:172] (0xc00072cd10) Reply frame received for 3\nI0626 01:00:11.902418 3461 log.go:172] (0xc00072cd10) (0xc00039aaa0) Create stream\nI0626 01:00:11.902432 3461 log.go:172] (0xc00072cd10) (0xc00039aaa0) Stream added, broadcasting: 5\nI0626 01:00:11.903171 3461 log.go:172] (0xc00072cd10) Reply frame received for 5\nI0626 01:00:11.950519 3461 log.go:172] (0xc00072cd10) Data frame received for 3\nI0626 01:00:11.950549 3461 log.go:172] (0xc00065ad20) (3) Data frame handling\nI0626 01:00:11.950571 3461 log.go:172] (0xc00072cd10) Data frame received for 5\nI0626 01:00:11.950581 3461 log.go:172] (0xc00039aaa0) (5) Data frame handling\nI0626 01:00:11.950605 3461 log.go:172] (0xc00039aaa0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32479\nConnection to 172.17.0.13 32479 port [tcp/32479] succeeded!\nI0626 01:00:11.950830 3461 log.go:172] (0xc00072cd10) Data frame received for 5\nI0626 01:00:11.950849 3461 log.go:172] (0xc00039aaa0) (5) Data frame handling\nI0626 01:00:11.952379 3461 log.go:172] (0xc00072cd10) Data frame received for 1\nI0626 01:00:11.952407 3461 log.go:172] (0xc000ada6e0) (1) Data frame handling\nI0626 01:00:11.952420 3461 log.go:172] (0xc000ada6e0) (1) Data frame sent\nI0626 01:00:11.952436 3461 log.go:172] (0xc00072cd10) (0xc000ada6e0) Stream removed, broadcasting: 1\nI0626 01:00:11.952835 3461 log.go:172] (0xc00072cd10) (0xc000ada6e0) Stream removed, broadcasting: 1\nI0626 01:00:11.952856 3461 log.go:172] (0xc00072cd10) (0xc00065ad20) Stream removed, broadcasting: 3\nI0626 01:00:11.952868 3461 log.go:172] (0xc00072cd10) (0xc00039aaa0) Stream removed, broadcasting: 5\n" Jun 26 01:00:11.959: INFO: stdout: "" Jun 26 01:00:11.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4549 execpodg7rhd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32479' Jun 26 01:00:12.167: INFO: stderr: "I0626 01:00:12.090578 3481 log.go:172] (0xc0007aa210) (0xc0008c4e60) Create stream\nI0626 01:00:12.090620 3481 log.go:172] (0xc0007aa210) (0xc0008c4e60) Stream added, broadcasting: 1\nI0626 01:00:12.092113 3481 log.go:172] (0xc0007aa210) Reply frame received for 1\nI0626 01:00:12.092159 3481 log.go:172] (0xc0007aa210) (0xc0008c5400) Create stream\nI0626 01:00:12.092173 3481 log.go:172] 
(0xc0007aa210) (0xc0008c5400) Stream added, broadcasting: 3\nI0626 01:00:12.093218 3481 log.go:172] (0xc0007aa210) Reply frame received for 3\nI0626 01:00:12.093326 3481 log.go:172] (0xc0007aa210) (0xc0008c5ea0) Create stream\nI0626 01:00:12.093334 3481 log.go:172] (0xc0007aa210) (0xc0008c5ea0) Stream added, broadcasting: 5\nI0626 01:00:12.094545 3481 log.go:172] (0xc0007aa210) Reply frame received for 5\nI0626 01:00:12.158981 3481 log.go:172] (0xc0007aa210) Data frame received for 5\nI0626 01:00:12.159028 3481 log.go:172] (0xc0008c5ea0) (5) Data frame handling\nI0626 01:00:12.159047 3481 log.go:172] (0xc0008c5ea0) (5) Data frame sent\nI0626 01:00:12.159058 3481 log.go:172] (0xc0007aa210) Data frame received for 5\nI0626 01:00:12.159069 3481 log.go:172] (0xc0008c5ea0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32479\nConnection to 172.17.0.12 32479 port [tcp/32479] succeeded!\nI0626 01:00:12.159101 3481 log.go:172] (0xc0007aa210) Data frame received for 3\nI0626 01:00:12.159122 3481 log.go:172] (0xc0008c5400) (3) Data frame handling\nI0626 01:00:12.160723 3481 log.go:172] (0xc0007aa210) Data frame received for 1\nI0626 01:00:12.160741 3481 log.go:172] (0xc0008c4e60) (1) Data frame handling\nI0626 01:00:12.160751 3481 log.go:172] (0xc0008c4e60) (1) Data frame sent\nI0626 01:00:12.160761 3481 log.go:172] (0xc0007aa210) (0xc0008c4e60) Stream removed, broadcasting: 1\nI0626 01:00:12.160846 3481 log.go:172] (0xc0007aa210) Go away received\nI0626 01:00:12.161060 3481 log.go:172] (0xc0007aa210) (0xc0008c4e60) Stream removed, broadcasting: 1\nI0626 01:00:12.161082 3481 log.go:172] (0xc0007aa210) (0xc0008c5400) Stream removed, broadcasting: 3\nI0626 01:00:12.161101 3481 log.go:172] (0xc0007aa210) (0xc0008c5ea0) Stream removed, broadcasting: 5\n" Jun 26 01:00:12.167: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:00:12.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4549" for this suite. 
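The nc checks above confirm the service answers on its DNS name, on its cluster IP (10.99.96.119:80), and on both node IPs at the allocated node port (32479). A minimal sketch of a service of this shape follows; the selector and targetPort are assumptions, since the log does not print them:

apiVersion: v1
kind: Service
metadata:
  name: nodeport-test
spec:
  type: NodePort
  selector:
    name: nodeport-test   # assumed; must match the replication controller's pod labels
  ports:
  - port: 80              # reachable via the cluster IP, per the nc check above
    targetPort: 80        # assumed
    # nodePort left unset here; the apiserver allocated 32479 in this run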
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:12.760 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":294,"completed":250,"skipped":3962,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:00:12.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0626 01:00:13.321568 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 26 01:00:13.321: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:00:13.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7775" for this suite. 
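The garbage collector removes the ReplicaSet (and its pods) because the Deployment is deleted without orphaning and the ReplicaSet carries an ownerReference back to it. An illustrative metadata fragment of that linkage — the name and uid are placeholders, while the controller and blockOwnerDeletion fields match what the suite's object dumps show elsewhere in this log:

metadata:
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: example-deployment                    # placeholder
    uid: 00000000-0000-0000-0000-000000000000   # placeholder
    controller: true
    blockOwnerDeletion: true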
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":294,"completed":251,"skipped":4002,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:00:13.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 01:02:13.544: INFO: Deleting pod "var-expansion-5ce84e69-6f28-442f-83cd-edfd4c3ac54b" in namespace "var-expansion-4777" Jun 26 01:02:13.549: INFO: Wait up to 5m0s for pod "var-expansion-5ce84e69-6f28-442f-83cd-edfd4c3ac54b" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:02:17.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4777" for this suite. 
• [SLOW TEST:124.295 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":294,"completed":252,"skipped":4002,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:02:17.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 01:02:18.450: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 01:02:20.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730138, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730138, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730138, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730138, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 01:02:23.572: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the 
validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:02:23.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1444" for this suite. STEP: Destroying namespace "webhook-1444-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.239 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":294,"completed":253,"skipped":4008,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:02:23.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 01:02:23.926: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 26 01:02:25.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 create -f -' Jun 26 01:02:29.061: INFO: stderr: "" Jun 26 01:02:29.061: INFO: stdout: "e2e-test-crd-publish-openapi-8605-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 26 01:02:29.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 delete e2e-test-crd-publish-openapi-8605-crds test-cr' Jun 26 01:02:29.185: INFO: stderr: "" Jun 26 01:02:29.185: INFO: stdout: "e2e-test-crd-publish-openapi-8605-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jun 26 01:02:29.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 apply -f -' Jun 26 01:02:29.480: INFO: 
stderr: "" Jun 26 01:02:29.480: INFO: stdout: "e2e-test-crd-publish-openapi-8605-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 26 01:02:29.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 delete e2e-test-crd-publish-openapi-8605-crds test-cr' Jun 26 01:02:29.603: INFO: stderr: "" Jun 26 01:02:29.603: INFO: stdout: "e2e-test-crd-publish-openapi-8605-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jun 26 01:02:29.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8605-crds' Jun 26 01:02:29.867: INFO: stderr: "" Jun 26 01:02:29.867: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8605-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:02:32.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-606" for this suite. • [SLOW TEST:8.894 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":294,"completed":254,"skipped":4048,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:02:32.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 01:02:32.840: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 26 01:02:32.856: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 26 01:02:37.859: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 26 01:02:37.859: INFO: Creating deployment "test-rolling-update-deployment" Jun 26 01:02:37.864: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set 
"test-rolling-update-controller" has Jun 26 01:02:37.887: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 26 01:02:39.895: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 26 01:02:39.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730157, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730157, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730158, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730157, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 26 01:02:41.902: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 26 01:02:41.912: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9705 /apis/apps/v1/namespaces/deployment-9705/deployments/test-rolling-update-deployment c8a63256-7cf2-4149-b207-45f808e5dd47 15930578 1 2020-06-26 01:02:37 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-06-26 01:02:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-26 01:02:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil 
nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f915d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-26 01:02:37 +0000 UTC,LastTransitionTime:2020-06-26 01:02:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-06-26 01:02:41 +0000 UTC,LastTransitionTime:2020-06-26 01:02:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 26 01:02:41.916: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-9705 /apis/apps/v1/namespaces/deployment-9705/replicasets/test-rolling-update-deployment-df7bb669b 018a85e2-03d4-499e-bcd5-87055b015d00 15930567 1 2020-06-26 01:02:37 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c8a63256-7cf2-4149-b207-45f808e5dd47 0xc005f91b70 0xc005f91b71}] [] [{kube-controller-manager Update apps/v1 2020-06-26 01:02:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c8a63256-7cf2-4149-b207-45f808e5dd47\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f91bf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 26 01:02:41.916: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 26 01:02:41.916: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9705 /apis/apps/v1/namespaces/deployment-9705/replicasets/test-rolling-update-controller 38ccf8b6-0717-42ce-bc96-16b97ff63d77 15930576 2 2020-06-26 01:02:32 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c8a63256-7cf2-4149-b207-45f808e5dd47 0xc005f91a27 0xc005f91a28}] [] [{e2e.test Update apps/v1 2020-06-26 01:02:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-26 01:02:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c8a63256-7cf2-4149-b207-45f808e5dd47\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005f91b08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 26 01:02:41.920: INFO: Pod "test-rolling-update-deployment-df7bb669b-478r7" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-478r7 test-rolling-update-deployment-df7bb669b- deployment-9705 /api/v1/namespaces/deployment-9705/pods/test-rolling-update-deployment-df7bb669b-478r7 108454b1-ec28-4364-b42e-7ad09303ce9b 15930566 0 2020-06-26 01:02:37 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 018a85e2-03d4-499e-bcd5-87055b015d00 0xc00606c160 0xc00606c161}] [] [{kube-controller-manager Update v1 2020-06-26 01:02:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"018a85e2-03d4-499e-bcd5-87055b015d00\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-26 01:02:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.238\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvwsm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvwsm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvwsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPr
obe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 01:02:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 01:02:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 01:02:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-26 01:02:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.238,StartTime:2020-06-26 01:02:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-26 01:02:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://bf874c90a5531722a5bebb7013467c25afa0a69ad935e2b0ee1075c1be66102b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.238,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:02:41.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9705" for this suite. 
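Condensed from the Deployment dump above, the essential spec under test. Note the strategy values appear in the log as "25%!,(MISSING)" — a Go fmt artifact for plain 25%:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%           # logged as "25%!,(MISSING)"
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13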
• [SLOW TEST:9.169 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":294,"completed":255,"skipped":4049,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:02:41.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-rpxq STEP: Creating a pod to test atomic-volume-subpath Jun 26 01:02:42.115: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rpxq" in namespace "subpath-991" to be "Succeeded or Failed" Jun 26 01:02:42.150: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.995251ms Jun 26 01:02:44.155: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040055392s Jun 26 01:02:46.160: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Running", Reason="", readiness=true. Elapsed: 4.04421178s Jun 26 01:02:48.164: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Running", Reason="", readiness=true. Elapsed: 6.048933326s Jun 26 01:02:50.169: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Running", Reason="", readiness=true. Elapsed: 8.053954885s Jun 26 01:02:52.174: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Running", Reason="", readiness=true. Elapsed: 10.05875172s Jun 26 01:02:54.178: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Running", Reason="", readiness=true. Elapsed: 12.062346485s Jun 26 01:02:56.182: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Running", Reason="", readiness=true. Elapsed: 14.066651774s Jun 26 01:02:58.186: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Running", Reason="", readiness=true. Elapsed: 16.070705993s Jun 26 01:03:00.191: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Running", Reason="", readiness=true. Elapsed: 18.075517657s Jun 26 01:03:02.195: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Running", Reason="", readiness=true. Elapsed: 20.079675198s Jun 26 01:03:04.199: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Running", Reason="", readiness=true. Elapsed: 22.083888885s Jun 26 01:03:06.203: INFO: Pod "pod-subpath-test-secret-rpxq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.088021618s STEP: Saw pod success Jun 26 01:03:06.203: INFO: Pod "pod-subpath-test-secret-rpxq" satisfied condition "Succeeded or Failed" Jun 26 01:03:06.208: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-rpxq container test-container-subpath-secret-rpxq: STEP: delete the pod Jun 26 01:03:06.306: INFO: Waiting for pod pod-subpath-test-secret-rpxq to disappear Jun 26 01:03:06.342: INFO: Pod pod-subpath-test-secret-rpxq no longer exists STEP: Deleting pod pod-subpath-test-secret-rpxq Jun 26 01:03:06.342: INFO: Deleting pod "pod-subpath-test-secret-rpxq" in namespace "subpath-991" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:03:06.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-991" for this suite. • [SLOW TEST:24.427 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":294,"completed":256,"skipped":4049,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:03:06.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 26 01:03:06.987: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 26 01:03:08.998: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730187, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730187, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730187, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728730186, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 26 01:03:12.034: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:03:12.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6147" for this suite. STEP: Destroying namespace "webhook-6147-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.909 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":294,"completed":257,"skipped":4123,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:03:12.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6620 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in 
namespace services-6620 I0626 01:03:12.774483 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6620, replica count: 2 I0626 01:03:15.824939 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0626 01:03:18.825432 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 26 01:03:18.825: INFO: Creating new exec pod Jun 26 01:03:23.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6620 execpodmvwjc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 26 01:03:24.091: INFO: stderr: "I0626 01:03:23.992390 3613 log.go:172] (0xc00003b600) (0xc00087ff40) Create stream\nI0626 01:03:23.992442 3613 log.go:172] (0xc00003b600) (0xc00087ff40) Stream added, broadcasting: 1\nI0626 01:03:23.994247 3613 log.go:172] (0xc00003b600) Reply frame received for 1\nI0626 01:03:23.994286 3613 log.go:172] (0xc00003b600) (0xc000878780) Create stream\nI0626 01:03:23.994298 3613 log.go:172] (0xc00003b600) (0xc000878780) Stream added, broadcasting: 3\nI0626 01:03:23.995198 3613 log.go:172] (0xc00003b600) Reply frame received for 3\nI0626 01:03:23.995230 3613 log.go:172] (0xc00003b600) (0xc00086e780) Create stream\nI0626 01:03:23.995240 3613 log.go:172] (0xc00003b600) (0xc00086e780) Stream added, broadcasting: 5\nI0626 01:03:23.995992 3613 log.go:172] (0xc00003b600) Reply frame received for 5\nI0626 01:03:24.084189 3613 log.go:172] (0xc00003b600) Data frame received for 3\nI0626 01:03:24.084242 3613 log.go:172] (0xc000878780) (3) Data frame handling\nI0626 01:03:24.084290 3613 log.go:172] (0xc00003b600) Data frame received for 5\nI0626 01:03:24.084332 3613 log.go:172] (0xc00086e780) (5) Data frame handling\nI0626 01:03:24.084365 3613 log.go:172] (0xc00086e780) (5) Data frame sent\nI0626 01:03:24.084393 3613 log.go:172] (0xc00003b600) Data frame received for 5\nI0626 01:03:24.084410 3613 log.go:172] (0xc00086e780) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0626 01:03:24.085819 3613 log.go:172] (0xc00003b600) Data frame received for 1\nI0626 01:03:24.085835 3613 log.go:172] (0xc00087ff40) (1) Data frame handling\nI0626 01:03:24.085845 3613 log.go:172] (0xc00087ff40) (1) Data frame sent\nI0626 01:03:24.085856 3613 log.go:172] (0xc00003b600) (0xc00087ff40) Stream removed, broadcasting: 1\nI0626 01:03:24.085913 3613 log.go:172] (0xc00003b600) Go away received\nI0626 01:03:24.086108 3613 log.go:172] (0xc00003b600) (0xc00087ff40) Stream removed, broadcasting: 1\nI0626 01:03:24.086131 3613 log.go:172] (0xc00003b600) (0xc000878780) Stream removed, broadcasting: 3\nI0626 01:03:24.086140 3613 log.go:172] (0xc00003b600) (0xc00086e780) Stream removed, broadcasting: 5\n" Jun 26 01:03:24.091: INFO: stdout: "" Jun 26 01:03:24.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6620 execpodmvwjc -- /bin/sh -x -c nc -zv -t -w 2 10.97.97.249 80' Jun 26 01:03:24.288: INFO: stderr: "I0626 01:03:24.204611 3634 log.go:172] (0xc000abd810) (0xc00084b860) Create stream\nI0626 01:03:24.204672 3634 log.go:172] (0xc000abd810) (0xc00084b860) Stream added, broadcasting: 1\nI0626 01:03:24.207162 3634 log.go:172] (0xc000abd810) 
Reply frame received for 1\nI0626 01:03:24.207197 3634 log.go:172] (0xc000abd810) (0xc000854320) Create stream\nI0626 01:03:24.207208 3634 log.go:172] (0xc000abd810) (0xc000854320) Stream added, broadcasting: 3\nI0626 01:03:24.208285 3634 log.go:172] (0xc000abd810) Reply frame received for 3\nI0626 01:03:24.208317 3634 log.go:172] (0xc000abd810) (0xc000854c80) Create stream\nI0626 01:03:24.208333 3634 log.go:172] (0xc000abd810) (0xc000854c80) Stream added, broadcasting: 5\nI0626 01:03:24.209546 3634 log.go:172] (0xc000abd810) Reply frame received for 5\nI0626 01:03:24.281344 3634 log.go:172] (0xc000abd810) Data frame received for 5\nI0626 01:03:24.281468 3634 log.go:172] (0xc000854c80) (5) Data frame handling\nI0626 01:03:24.281491 3634 log.go:172] (0xc000854c80) (5) Data frame sent\nI0626 01:03:24.281510 3634 log.go:172] (0xc000abd810) Data frame received for 5\nI0626 01:03:24.281519 3634 log.go:172] (0xc000854c80) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.97.249 80\nConnection to 10.97.97.249 80 port [tcp/http] succeeded!\nI0626 01:03:24.281555 3634 log.go:172] (0xc000abd810) Data frame received for 3\nI0626 01:03:24.281573 3634 log.go:172] (0xc000854320) (3) Data frame handling\nI0626 01:03:24.282691 3634 log.go:172] (0xc000abd810) Data frame received for 1\nI0626 01:03:24.282730 3634 log.go:172] (0xc00084b860) (1) Data frame handling\nI0626 01:03:24.282749 3634 log.go:172] (0xc00084b860) (1) Data frame sent\nI0626 01:03:24.282766 3634 log.go:172] (0xc000abd810) (0xc00084b860) Stream removed, broadcasting: 1\nI0626 01:03:24.282809 3634 log.go:172] (0xc000abd810) Go away received\nI0626 01:03:24.283253 3634 log.go:172] (0xc000abd810) (0xc00084b860) Stream removed, broadcasting: 1\nI0626 01:03:24.283272 3634 log.go:172] (0xc000abd810) (0xc000854320) Stream removed, broadcasting: 3\nI0626 01:03:24.283280 3634 log.go:172] (0xc000abd810) (0xc000854c80) Stream removed, broadcasting: 5\n" Jun 26 01:03:24.288: INFO: stdout: "" Jun 26 01:03:24.288: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:03:24.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6620" for this suite. 
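For readers following along outside the harness: the ExternalName-to-ClusterIP flip exercised above can be reproduced with a few lines of client-go. A minimal sketch, assuming a client-go release contemporary with this suite (typed clients take a context) and reusing the namespace, service name, and kubeconfig path from the log; the single port definition is illustrative, added because a ClusterIP service must declare at least one port:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svcs := cs.CoreV1().Services("services-6620")
	svc, err := svcs.Get(context.TODO(), "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Flip the Service from ExternalName to ClusterIP: externalName must be
	// cleared, and at least one port defined so the apiserver accepts the
	// update and a virtual IP can be allocated.
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.ExternalName = ""
	svc.Spec.Ports = []corev1.ServicePort{{Name: "http", Port: 80}} // illustrative port
	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}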
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:12.104 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":294,"completed":258,"skipped":4155,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:03:24.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1398 STEP: creating a pod Jun 26 01:03:24.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-1440 -- logs-generator --log-lines-total 100 --run-duration 20s' Jun 26 01:03:24.539: INFO: stderr: "" Jun 26 01:03:24.540: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Jun 26 01:03:24.540: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jun 26 01:03:24.540: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1440" to be "running and ready, or succeeded" Jun 26 01:03:24.575: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 35.840192ms Jun 26 01:03:26.594: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054585648s Jun 26 01:03:28.599: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.059013568s Jun 26 01:03:28.599: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jun 26 01:03:28.599: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Jun 26 01:03:28.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1440' Jun 26 01:03:28.725: INFO: stderr: "" Jun 26 01:03:28.725: INFO: stdout: "I0626 01:03:26.999841 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/wnq 350\nI0626 01:03:27.199983 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/qdx 392\nI0626 01:03:27.399995 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/rgs 308\nI0626 01:03:27.599984 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/4xr 214\nI0626 01:03:27.799998 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/qqx 288\nI0626 01:03:28.000054 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/ptbw 334\nI0626 01:03:28.200045 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/4z7z 435\nI0626 01:03:28.400042 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/244 569\nI0626 01:03:28.600087 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/9sj 383\n" STEP: limiting log lines Jun 26 01:03:28.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1440 --tail=1' Jun 26 01:03:28.854: INFO: stderr: "" Jun 26 01:03:28.854: INFO: stdout: "I0626 01:03:28.800038 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/h5k 376\n" Jun 26 01:03:28.854: INFO: got output "I0626 01:03:28.800038 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/h5k 376\n" STEP: limiting log bytes Jun 26 01:03:28.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1440 --limit-bytes=1' Jun 26 01:03:28.975: INFO: stderr: "" Jun 26 01:03:28.975: INFO: stdout: "I" Jun 26 01:03:28.975: INFO: got output "I" STEP: exposing timestamps Jun 26 01:03:28.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1440 --tail=1 --timestamps' Jun 26 01:03:29.095: INFO: stderr: "" Jun 26 01:03:29.095: INFO: stdout: "2020-06-26T01:03:29.000138523Z I0626 01:03:29.000004 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/bpz 388\n" Jun 26 01:03:29.095: INFO: got output "2020-06-26T01:03:29.000138523Z I0626 01:03:29.000004 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/bpz 388\n" STEP: restricting to a time range Jun 26 01:03:31.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1440 --since=1s' Jun 26 01:03:31.747: INFO: stderr: "" Jun 26 01:03:31.747: INFO: stdout: "I0626 01:03:30.800045 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/xcq 583\nI0626 01:03:31.000026 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/9kp 337\nI0626 01:03:31.200048 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/g45g 496\nI0626 01:03:31.400024 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/vvbh 566\nI0626 01:03:31.600036 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/hkxd 299\n" Jun 26 01:03:31.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773
--kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1440 --since=24h' Jun 26 01:03:31.858: INFO: stderr: "" Jun 26 01:03:31.858: INFO: stdout: "I0626 01:03:26.999841 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/wnq 350\nI0626 01:03:27.199983 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/qdx 392\nI0626 01:03:27.399995 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/rgs 308\nI0626 01:03:27.599984 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/4xr 214\nI0626 01:03:27.799998 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/qqx 288\nI0626 01:03:28.000054 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/ptbw 334\nI0626 01:03:28.200045 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/4z7z 435\nI0626 01:03:28.400042 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/244 569\nI0626 01:03:28.600087 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/9sj 383\nI0626 01:03:28.800038 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/h5k 376\nI0626 01:03:29.000004 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/bpz 388\nI0626 01:03:29.200053 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/sjz 477\nI0626 01:03:29.400003 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/bzp 433\nI0626 01:03:29.599991 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/mjs 533\nI0626 01:03:29.800036 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/ztpp 471\nI0626 01:03:29.999990 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/29zh 247\nI0626 01:03:30.200007 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/9cl5 534\nI0626 01:03:30.400128 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/6zw 517\nI0626 01:03:30.600011 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/skpg 310\nI0626 01:03:30.800045 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/xcq 583\nI0626 01:03:31.000026 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/9kp 337\nI0626 01:03:31.200048 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/g45g 496\nI0626 01:03:31.400024 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/vvbh 566\nI0626 01:03:31.600036 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/hkxd 299\nI0626 01:03:31.799983 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/qfhx 264\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jun 26 01:03:31.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1440' Jun 26 01:03:45.278: INFO: stderr: "" Jun 26 01:03:45.278: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:03:45.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1440" for this suite. 
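Each kubectl flag exercised above (--tail, --limit-bytes, --timestamps, --since) maps onto a field of corev1.PodLogOptions. A minimal client-go sketch of the same filtering, assuming the logs-generator pod from the log is still running and a client-go release where rest.Request.DoRaw takes a context; error handling is trimmed:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent of `kubectl logs logs-generator logs-generator --tail=1 --timestamps`.
	tail := int64(1)
	opts := &corev1.PodLogOptions{
		Container:  "logs-generator",
		TailLines:  &tail, // --tail=1
		Timestamps: true,  // --timestamps
		// LimitBytes and SinceSeconds are the analogues of
		// --limit-bytes and --since.
	}
	raw, err := cs.CoreV1().Pods("kubectl-1440").GetLogs("logs-generator", opts).DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Print(string(raw))
}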
• [SLOW TEST:20.916 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1394 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":294,"completed":259,"skipped":4204,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:03:45.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 26 01:03:45.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-833' Jun 26 01:03:45.475: INFO: stderr: "" Jun 26 01:03:45.475: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Jun 26 01:03:45.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-833' Jun 26 01:03:45.600: INFO: stderr: "" Jun 26 01:03:45.600: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-26T01:03:45Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-06-26T01:03:45Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n 
\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-06-26T01:03:45Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-833\",\n \"resourceVersion\": \"15930993\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-833/pods/e2e-test-httpd-pod\",\n \"uid\": \"ec618c1a-6640-4eb7-848b-54bb7110ea12\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-dghvj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-dghvj\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-dghvj\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T01:03:45Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T01:03:45Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T01:03:45Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-26T01:03:45Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-26T01:03:45Z\"\n }\n}\n" Jun 26 01:03:45.601: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-833' Jun 26 01:03:45.954: INFO: stderr: "W0626 01:03:45.677197 3857 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Jun 26 01:03:45.954: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Jun 26 01:03:45.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-833' Jun 26 01:03:47.996: INFO: stderr: "" Jun 26 01:03:47.996: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:03:47.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-833" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":294,"completed":260,"skipped":4211,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:03:48.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Jun 26 01:03:54.223: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9178 PodName:pod-sharedvolume-35b7c927-7642-453e-92f3-63bac232d567 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 01:03:54.223: INFO: >>> kubeConfig: /root/.kube/config I0626 01:03:54.246230 8 log.go:172] (0xc0010de2c0) (0xc0028bdd60) Create stream I0626 01:03:54.246257 8 log.go:172] (0xc0010de2c0) (0xc0028bdd60) Stream added, broadcasting: 1 I0626 01:03:54.247861 8 log.go:172] (0xc0010de2c0) Reply frame received for 1 I0626 01:03:54.247925 8 log.go:172] (0xc0010de2c0) (0xc00290e8c0) Create stream I0626 01:03:54.247945 8 log.go:172] (0xc0010de2c0) (0xc00290e8c0) Stream added, broadcasting: 3 I0626 01:03:54.248702 8 log.go:172] (0xc0010de2c0) Reply frame received for 3 I0626 01:03:54.248737 8 log.go:172] (0xc0010de2c0) (0xc00122e000) Create stream I0626 01:03:54.248748 8 log.go:172] (0xc0010de2c0) (0xc00122e000) Stream added, broadcasting: 5 I0626 01:03:54.249580 8 log.go:172] (0xc0010de2c0) Reply frame received for 5 I0626 01:03:54.336700 8 log.go:172] (0xc0010de2c0)
Data frame received for 3 I0626 01:03:54.336735 8 log.go:172] (0xc0010de2c0) Data frame received for 5 I0626 01:03:54.336766 8 log.go:172] (0xc00122e000) (5) Data frame handling I0626 01:03:54.336796 8 log.go:172] (0xc00290e8c0) (3) Data frame handling I0626 01:03:54.336816 8 log.go:172] (0xc00290e8c0) (3) Data frame sent I0626 01:03:54.336826 8 log.go:172] (0xc0010de2c0) Data frame received for 3 I0626 01:03:54.336839 8 log.go:172] (0xc00290e8c0) (3) Data frame handling I0626 01:03:54.338252 8 log.go:172] (0xc0010de2c0) Data frame received for 1 I0626 01:03:54.338278 8 log.go:172] (0xc0028bdd60) (1) Data frame handling I0626 01:03:54.338291 8 log.go:172] (0xc0028bdd60) (1) Data frame sent I0626 01:03:54.338302 8 log.go:172] (0xc0010de2c0) (0xc0028bdd60) Stream removed, broadcasting: 1 I0626 01:03:54.338318 8 log.go:172] (0xc0010de2c0) Go away received I0626 01:03:54.338465 8 log.go:172] (0xc0010de2c0) (0xc0028bdd60) Stream removed, broadcasting: 1 I0626 01:03:54.338485 8 log.go:172] (0xc0010de2c0) (0xc00290e8c0) Stream removed, broadcasting: 3 I0626 01:03:54.338496 8 log.go:172] (0xc0010de2c0) (0xc00122e000) Stream removed, broadcasting: 5 Jun 26 01:03:54.338: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:03:54.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9178" for this suite. • [SLOW TEST:6.343 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":294,"completed":261,"skipped":4215,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:03:54.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 26 01:04:02.489: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 26 01:04:02.534: INFO: Pod pod-with-poststart-http-hook still exists Jun 26 01:04:04.534: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 26 01:04:04.539: INFO: Pod pod-with-poststart-http-hook still exists Jun 26 01:04:06.535: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 26 01:04:06.540: INFO: Pod pod-with-poststart-http-hook still exists Jun 26 01:04:08.535: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 26 01:04:08.540: INFO: Pod pod-with-poststart-http-hook still exists Jun 26 01:04:10.535: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 26 01:04:10.540: INFO: Pod pod-with-poststart-http-hook still exists Jun 26 01:04:12.535: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 26 01:04:12.540: INFO: Pod pod-with-poststart-http-hook still exists Jun 26 01:04:14.535: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 26 01:04:14.539: INFO: Pod pod-with-poststart-http-hook still exists Jun 26 01:04:16.535: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 26 01:04:16.539: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:04:16.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5934" for this suite. 
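The pod under test pairs a container with a PostStart HTTP hook that calls back to the handler pod started in BeforeEach. A sketch of the relevant spec, assuming the v1.18/v1.19-era corev1.Handler type (later renamed LifecycleHandler); the image, path, port, and handler IP are illustrative, not the suite's exact values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// postStartHTTPPod fires an HTTP GET at targetIP:8080 as soon as the
// container starts; if the hook fails, the kubelet kills the container.
func postStartHTTPPod(targetIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // illustrative path
							Host: targetIP,
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}

func main() {
	pod := postStartHTTPPod("10.244.1.10") // illustrative handler-pod IP
	fmt.Println(pod.Spec.Containers[0].Lifecycle.PostStart.HTTPGet.Path)
}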
• [SLOW TEST:22.202 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":294,"completed":262,"skipped":4223,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:04:16.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-daed753b-cfed-452d-a642-50e8f12eeba8 STEP: Creating a pod to test consume secrets Jun 26 01:04:16.628: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d44ece0b-7c18-4334-802b-1f27027ea2e9" in namespace "projected-1739" to be "Succeeded or Failed" Jun 26 01:04:16.631: INFO: Pod "pod-projected-secrets-d44ece0b-7c18-4334-802b-1f27027ea2e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.205992ms Jun 26 01:04:18.636: INFO: Pod "pod-projected-secrets-d44ece0b-7c18-4334-802b-1f27027ea2e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007738722s Jun 26 01:04:20.640: INFO: Pod "pod-projected-secrets-d44ece0b-7c18-4334-802b-1f27027ea2e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01182751s STEP: Saw pod success Jun 26 01:04:20.640: INFO: Pod "pod-projected-secrets-d44ece0b-7c18-4334-802b-1f27027ea2e9" satisfied condition "Succeeded or Failed" Jun 26 01:04:20.643: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d44ece0b-7c18-4334-802b-1f27027ea2e9 container projected-secret-volume-test: STEP: delete the pod Jun 26 01:04:20.971: INFO: Waiting for pod pod-projected-secrets-d44ece0b-7c18-4334-802b-1f27027ea2e9 to disappear Jun 26 01:04:21.022: INFO: Pod pod-projected-secrets-d44ece0b-7c18-4334-802b-1f27027ea2e9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:04:21.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1739" for this suite. 
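A projected volume wraps one or more sources (here a single secret) behind one mount point, and the consuming container simply reads the projected key as a file. A minimal sketch of such a pod; the image, secret name, and file path are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// Several sources (secrets, configMaps, downward API,
						// service account tokens) could be listed here.
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-demo", // illustrative
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"cat", "/etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}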
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":263,"skipped":4247,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:04:21.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5810.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5810.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5810.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5810.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5810.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5810.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 01:04:27.371: INFO: DNS probes using dns-5810/dns-test-2bd1a7aa-081b-4d57-bd6d-ef0508d11109 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:04:27.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5810" for this suite. 
• [SLOW TEST:6.399 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":294,"completed":264,"skipped":4266,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:04:27.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-7b2e43a9-972b-4848-aa79-f566c28e2a1e STEP: Creating a pod to test consume configMaps Jun 26 01:04:27.890: INFO: Waiting up to 5m0s for pod "pod-configmaps-ab451439-3e91-4b35-97bb-37db92384dec" in namespace "configmap-4562" to be "Succeeded or Failed" Jun 26 01:04:28.044: INFO: Pod "pod-configmaps-ab451439-3e91-4b35-97bb-37db92384dec": Phase="Pending", Reason="", readiness=false. Elapsed: 153.550319ms Jun 26 01:04:30.048: INFO: Pod "pod-configmaps-ab451439-3e91-4b35-97bb-37db92384dec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157851371s Jun 26 01:04:32.058: INFO: Pod "pod-configmaps-ab451439-3e91-4b35-97bb-37db92384dec": Phase="Running", Reason="", readiness=true. Elapsed: 4.168091668s Jun 26 01:04:34.063: INFO: Pod "pod-configmaps-ab451439-3e91-4b35-97bb-37db92384dec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.172589351s STEP: Saw pod success Jun 26 01:04:34.063: INFO: Pod "pod-configmaps-ab451439-3e91-4b35-97bb-37db92384dec" satisfied condition "Succeeded or Failed" Jun 26 01:04:34.066: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ab451439-3e91-4b35-97bb-37db92384dec container configmap-volume-test: STEP: delete the pod Jun 26 01:04:34.142: INFO: Waiting for pod pod-configmaps-ab451439-3e91-4b35-97bb-37db92384dec to disappear Jun 26 01:04:34.147: INFO: Pod pod-configmaps-ab451439-3e91-4b35-97bb-37db92384dec no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:04:34.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4562" for this suite. 
• [SLOW TEST:6.602 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":294,"completed":265,"skipped":4266,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:04:34.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4638 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-4638 Jun 26 01:04:34.297: INFO: Found 0 stateful pods, waiting for 1 Jun 26 01:04:44.301: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 26 01:04:44.358: INFO: Deleting all statefulset in ns statefulset-4638 Jun 26 01:04:44.435: INFO: Scaling statefulset ss to 0 Jun 26 01:05:04.476: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 01:05:04.480: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:05:04.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4638" for this suite. 
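The "getting/updating a scale subresource" steps above go through the /scale endpoint, which returns an autoscaling/v1 Scale object rather than the full StatefulSet. A client-go sketch of the same flow, reusing the ss name and statefulset-4638 namespace from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sts := cs.AppsV1().StatefulSets("statefulset-4638")

	// GET .../statefulsets/ss/scale
	scale, err := sts.GetScale(context.TODO(), "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current replicas:", scale.Spec.Replicas)

	// PUT on the subresource touches only spec.replicas, nothing else
	// in the StatefulSet spec.
	scale.Spec.Replicas = 2
	if _, err := sts.UpdateScale(context.TODO(), "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}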
• [SLOW TEST:30.376 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":294,"completed":266,"skipped":4283,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:05:04.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-e2187cc9-618a-4c95-84a1-b992ca322d80 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:05:04.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3273" for this suite. 
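The negative test above leans on server-side validation: secret data keys must be non-empty (and match [-._a-zA-Z0-9]+), so the Create call itself returns an Invalid error and no pod is ever involved. A sketch, with an illustrative secret name and namespace:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The empty "" key fails apiserver validation.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-demo"}, // illustrative
		Data:       map[string][]byte{"": []byte("value-1")},
	}
	_, err = cs.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{})
	fmt.Println("expected validation error:", err)
}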
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":294,"completed":267,"skipped":4287,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:05:04.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-bd0b79dc-61d7-4613-b6f5-b804c400a675 STEP: Creating a pod to test consume configMaps Jun 26 01:05:04.732: INFO: Waiting up to 5m0s for pod "pod-configmaps-f0d6e34b-5d31-4dcb-86fb-482e9fa987c4" in namespace "configmap-1509" to be "Succeeded or Failed" Jun 26 01:05:04.777: INFO: Pod "pod-configmaps-f0d6e34b-5d31-4dcb-86fb-482e9fa987c4": Phase="Pending", Reason="", readiness=false. Elapsed: 44.976215ms Jun 26 01:05:07.056: INFO: Pod "pod-configmaps-f0d6e34b-5d31-4dcb-86fb-482e9fa987c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323538604s Jun 26 01:05:09.060: INFO: Pod "pod-configmaps-f0d6e34b-5d31-4dcb-86fb-482e9fa987c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327495566s STEP: Saw pod success Jun 26 01:05:09.060: INFO: Pod "pod-configmaps-f0d6e34b-5d31-4dcb-86fb-482e9fa987c4" satisfied condition "Succeeded or Failed" Jun 26 01:05:09.063: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f0d6e34b-5d31-4dcb-86fb-482e9fa987c4 container configmap-volume-test: STEP: delete the pod Jun 26 01:05:09.379: INFO: Waiting for pod pod-configmaps-f0d6e34b-5d31-4dcb-86fb-482e9fa987c4 to disappear Jun 26 01:05:09.381: INFO: Pod pod-configmaps-f0d6e34b-5d31-4dcb-86fb-482e9fa987c4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:05:09.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1509" for this suite. 
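defaultMode in a configMap volume source sets the octal file mode applied to every key projected into the volume, which is what the [LinuxOnly] variant above asserts on. A sketch of just the volume stanza, with an illustrative configMap name; 0400 is used here only as an example mode:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // read-only for the owner
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-volume-demo", // illustrative
				},
				DefaultMode: &mode,
			},
		},
	}
	fmt.Printf("%s mounts %s with mode %#o\n",
		vol.Name, vol.VolumeSource.ConfigMap.Name, *vol.VolumeSource.ConfigMap.DefaultMode)
}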
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":268,"skipped":4364,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:05:09.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:05:09.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8662" for this suite. 
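The discovery walk above (/apis, then the group document, then the group/version document) can be reproduced with the discovery client. A minimal sketch, assuming the same kubeconfig; error handling is trimmed:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// /apis: confirm the apiextensions.k8s.io group is advertised.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			fmt.Println("preferred version:", g.PreferredVersion.GroupVersion)
		}
	}

	// /apis/apiextensions.k8s.io/v1: confirm customresourcedefinitions
	// is listed as a resource of that group/version.
	rl, err := cs.Discovery().ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		if r.Name == "customresourcedefinitions" {
			fmt.Println("found resource:", r.Name)
		}
	}
}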
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":294,"completed":269,"skipped":4370,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:05:09.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:05:09.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6624" for this suite. STEP: Destroying namespace "nspatchtest-05ae2163-a99a-49ac-9f38-816474c64e9f-9711" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":294,"completed":270,"skipped":4373,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:05:09.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 26 01:05:18.011: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 01:05:18.074: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 01:05:20.075: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 01:05:20.079: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 01:05:22.074: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 01:05:22.079: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 01:05:24.075: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 01:05:24.078: INFO: Pod pod-with-poststart-exec-hook still exists Jun 26 01:05:26.074: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 26 01:05:26.079: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:05:26.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6218" for this suite. • [SLOW TEST:16.299 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":294,"completed":271,"skipped":4375,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:05:26.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:85 Jun 26 01:05:26.196: INFO: Waiting up to 1m0s for all nodes to be ready Jun 26 01:06:26.225: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:06:26.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:484 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Jun 26 01:06:30.393: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 01:06:48.890: INFO: pods created so far: [1 1 1] Jun 26 01:06:48.890: INFO: length of pods created so far: 3 Jun 26 01:07:00.899: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:07:07.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-5026" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:456 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:07:07.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3175" for this suite. 
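(Editor's note on the PreemptionExecutionPath spec above: it drives ReplicaSets at different pod priorities until the scheduler must evict lower-priority pods to place higher-priority ones; the "[1 1 1]" → "[2 2 1]" lines track pods created per ReplicaSet as that proceeds. The core ingredients are a PriorityClass and pods that reference it, sketched hypothetically below — the class name, value, image, and namespace are placeholders, not the test's actual objects.)

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// A cluster-scoped PriorityClass; pods referencing it outrank the default priority (0).
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority"}, // placeholder name
		Value:      1000,
	}
	if _, err := client.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A pod claiming the class; when the node is full, the scheduler may preempt
	// (evict) lower-priority pods to make room for it.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "high-prio-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority",
			Containers: []corev1.Container{{
				Name:  "c",
				Image: "docker.io/library/httpd:2.4.38-alpine", // placeholder image
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```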
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:75 • [SLOW TEST:101.972 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:445 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":294,"completed":272,"skipped":4399,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:07:08.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 01:07:08.139: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jun 26 01:07:10.191: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:07:11.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6054" for this suite. 
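(Editor's note on the ReplicationController quota spec above: a ResourceQuota capping the namespace at two pods makes a three-replica RC unsatisfiable, and the controller surfaces that as a ReplicaFailure condition on the RC's status, which clears once the RC is scaled back within quota. A hedged sketch of the quota plus the condition check — the namespace is a placeholder, and it assumes an RC named "condition-test" was created separately, as in the log.)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns := "default" // placeholder namespace

	// A quota that allows only two pods, mirroring "condition-test" in the log.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	if _, err := client.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// An RC asking for more replicas than the quota allows can never be fully
	// satisfied; the controller reports this in rc.Status.Conditions.
	rc, err := client.CoreV1().ReplicationControllers(ns).Get(ctx, "condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range rc.Status.Conditions {
		if cond.Type == corev1.ReplicationControllerReplicaFailure {
			fmt.Printf("ReplicaFailure %s: %s\n", cond.Status, cond.Message)
		}
	}
}
```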
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":294,"completed":273,"skipped":4406,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:07:11.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jun 26 01:07:11.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8477' Jun 26 01:07:12.414: INFO: stderr: "" Jun 26 01:07:12.414: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1533 Jun 26 01:07:12.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8477' Jun 26 01:07:24.870: INFO: stderr: "" Jun 26 01:07:24.870: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:07:24.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8477" for this suite. 
• [SLOW TEST:13.460 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":294,"completed":274,"skipped":4452,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:07:24.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 26 01:07:29.507: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e5b63db7-a364-4b6e-b502-fbfd46d1e9f5" Jun 26 01:07:29.507: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e5b63db7-a364-4b6e-b502-fbfd46d1e9f5" in namespace "pods-5957" to be "terminated due to deadline exceeded" Jun 26 01:07:29.527: INFO: Pod "pod-update-activedeadlineseconds-e5b63db7-a364-4b6e-b502-fbfd46d1e9f5": Phase="Running", Reason="", readiness=true. Elapsed: 19.26973ms Jun 26 01:07:31.530: INFO: Pod "pod-update-activedeadlineseconds-e5b63db7-a364-4b6e-b502-fbfd46d1e9f5": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.022811674s Jun 26 01:07:31.530: INFO: Pod "pod-update-activedeadlineseconds-e5b63db7-a364-4b6e-b502-fbfd46d1e9f5" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:07:31.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5957" for this suite. 
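(Editor's note on the activeDeadlineSeconds spec above: `spec.activeDeadlineSeconds` is one of the pod-spec fields that may be updated on a live pod, and once the deadline elapses the kubelet fails the pod with reason DeadlineExceeded — the Phase="Failed", Reason="DeadlineExceeded" transition in the log. A minimal sketch of such an update via strategic-merge patch; the pod name, namespace, and deadline value are placeholders.)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Set a short deadline on an already-running pod; the kubelet will terminate
	// it with reason DeadlineExceeded once the deadline passes.
	patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
	pod, err := client.CoreV1().Pods("default").Patch(ctx, "my-running-pod", // placeholder name
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(*pod.Spec.ActiveDeadlineSeconds) // expected: 5
}
```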
• [SLOW TEST:6.658 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":294,"completed":275,"skipped":4456,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:07:31.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 26 01:07:31.873: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-981 /api/v1/namespaces/watch-981/configmaps/e2e-watch-test-watch-closed f924dc43-d1de-4e0b-81e4-56cde0d42cca 15932329 0 2020-06-26 01:07:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-26 01:07:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 01:07:31.873: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-981 /api/v1/namespaces/watch-981/configmaps/e2e-watch-test-watch-closed f924dc43-d1de-4e0b-81e4-56cde0d42cca 15932330 0 2020-06-26 01:07:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-26 01:07:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 26 01:07:31.944: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-981 /api/v1/namespaces/watch-981/configmaps/e2e-watch-test-watch-closed f924dc43-d1de-4e0b-81e4-56cde0d42cca 15932331 0 2020-06-26 01:07:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-26 01:07:31 +0000 
UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 01:07:31.944: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-981 /api/v1/namespaces/watch-981/configmaps/e2e-watch-test-watch-closed f924dc43-d1de-4e0b-81e4-56cde0d42cca 15932333 0 2020-06-26 01:07:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-06-26 01:07:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:07:31.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-981" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":294,"completed":276,"skipped":4517,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:07:31.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-9850 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9850 to expose endpoints map[] Jun 26 01:07:32.218: INFO: Get endpoints failed (57.555557ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 26 01:07:33.222: INFO: successfully validated that service multi-endpoint-test in namespace services-9850 exposes endpoints map[] (1.061153543s elapsed) STEP: Creating pod pod1 in namespace services-9850 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9850 to expose endpoints map[pod1:[100]] Jun 26 01:07:37.261: INFO: successfully validated that service multi-endpoint-test in namespace services-9850 exposes endpoints map[pod1:[100]] (4.033392023s elapsed) STEP: Creating pod pod2 in namespace services-9850 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9850 to expose endpoints map[pod1:[100] pod2:[101]] Jun 26 01:07:40.367: INFO: successfully validated that service multi-endpoint-test in namespace services-9850 exposes endpoints map[pod1:[100] pod2:[101]] (3.102082475s elapsed) STEP: Deleting pod pod1 in namespace 
services-9850 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9850 to expose endpoints map[pod2:[101]] Jun 26 01:07:41.446: INFO: successfully validated that service multi-endpoint-test in namespace services-9850 exposes endpoints map[pod2:[101]] (1.073911154s elapsed) STEP: Deleting pod pod2 in namespace services-9850 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9850 to expose endpoints map[] Jun 26 01:07:42.488: INFO: successfully validated that service multi-endpoint-test in namespace services-9850 exposes endpoints map[] (1.035267051s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:07:42.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9850" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:10.598 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":294,"completed":277,"skipped":4519,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:07:42.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2382 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jun 26 01:07:42.931: INFO: Found 0 stateful pods, waiting for 3 Jun 26 01:07:52.937: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 26 01:07:52.937: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 01:07:52.937: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 26 01:08:02.937: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 26 01:08:02.937: INFO: Waiting for pod 
ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 01:08:02.937: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 26 01:08:02.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 01:08:03.191: INFO: stderr: "I0626 01:08:03.079945 3941 log.go:172] (0xc000a134a0) (0xc000882be0) Create stream\nI0626 01:08:03.080009 3941 log.go:172] (0xc000a134a0) (0xc000882be0) Stream added, broadcasting: 1\nI0626 01:08:03.085383 3941 log.go:172] (0xc000a134a0) Reply frame received for 1\nI0626 01:08:03.085435 3941 log.go:172] (0xc000a134a0) (0xc00087b7c0) Create stream\nI0626 01:08:03.085449 3941 log.go:172] (0xc000a134a0) (0xc00087b7c0) Stream added, broadcasting: 3\nI0626 01:08:03.086585 3941 log.go:172] (0xc000a134a0) Reply frame received for 3\nI0626 01:08:03.086610 3941 log.go:172] (0xc000a134a0) (0xc00086e820) Create stream\nI0626 01:08:03.086617 3941 log.go:172] (0xc000a134a0) (0xc00086e820) Stream added, broadcasting: 5\nI0626 01:08:03.087535 3941 log.go:172] (0xc000a134a0) Reply frame received for 5\nI0626 01:08:03.157665 3941 log.go:172] (0xc000a134a0) Data frame received for 5\nI0626 01:08:03.157712 3941 log.go:172] (0xc00086e820) (5) Data frame handling\nI0626 01:08:03.157739 3941 log.go:172] (0xc00086e820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 01:08:03.182441 3941 log.go:172] (0xc000a134a0) Data frame received for 3\nI0626 01:08:03.182479 3941 log.go:172] (0xc00087b7c0) (3) Data frame handling\nI0626 01:08:03.182506 3941 log.go:172] (0xc00087b7c0) (3) Data frame sent\nI0626 01:08:03.182992 3941 log.go:172] (0xc000a134a0) Data frame received for 3\nI0626 01:08:03.183037 3941 log.go:172] (0xc00087b7c0) (3) Data frame handling\nI0626 01:08:03.183153 3941 log.go:172] (0xc000a134a0) Data frame received for 5\nI0626 01:08:03.183232 3941 log.go:172] (0xc00086e820) (5) Data frame handling\nI0626 01:08:03.185601 3941 log.go:172] (0xc000a134a0) Data frame received for 1\nI0626 01:08:03.185625 3941 log.go:172] (0xc000882be0) (1) Data frame handling\nI0626 01:08:03.185644 3941 log.go:172] (0xc000882be0) (1) Data frame sent\nI0626 01:08:03.185661 3941 log.go:172] (0xc000a134a0) (0xc000882be0) Stream removed, broadcasting: 1\nI0626 01:08:03.185679 3941 log.go:172] (0xc000a134a0) Go away received\nI0626 01:08:03.186107 3941 log.go:172] (0xc000a134a0) (0xc000882be0) Stream removed, broadcasting: 1\nI0626 01:08:03.186130 3941 log.go:172] (0xc000a134a0) (0xc00087b7c0) Stream removed, broadcasting: 3\nI0626 01:08:03.186144 3941 log.go:172] (0xc000a134a0) (0xc00086e820) Stream removed, broadcasting: 5\n" Jun 26 01:08:03.192: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 01:08:03.192: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jun 26 01:08:13.226: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 26 01:08:23.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Jun 26 01:08:23.515: INFO: stderr: "I0626 01:08:23.423873 3963 log.go:172] (0xc0009b93f0) (0xc0009ec140) Create stream\nI0626 01:08:23.423922 3963 log.go:172] (0xc0009b93f0) (0xc0009ec140) Stream added, broadcasting: 1\nI0626 01:08:23.428916 3963 log.go:172] (0xc0009b93f0) Reply frame received for 1\nI0626 01:08:23.428951 3963 log.go:172] (0xc0009b93f0) (0xc000854c80) Create stream\nI0626 01:08:23.428961 3963 log.go:172] (0xc0009b93f0) (0xc000854c80) Stream added, broadcasting: 3\nI0626 01:08:23.430102 3963 log.go:172] (0xc0009b93f0) Reply frame received for 3\nI0626 01:08:23.430144 3963 log.go:172] (0xc0009b93f0) (0xc000846500) Create stream\nI0626 01:08:23.430158 3963 log.go:172] (0xc0009b93f0) (0xc000846500) Stream added, broadcasting: 5\nI0626 01:08:23.431237 3963 log.go:172] (0xc0009b93f0) Reply frame received for 5\nI0626 01:08:23.506106 3963 log.go:172] (0xc0009b93f0) Data frame received for 3\nI0626 01:08:23.506150 3963 log.go:172] (0xc000854c80) (3) Data frame handling\nI0626 01:08:23.506174 3963 log.go:172] (0xc000854c80) (3) Data frame sent\nI0626 01:08:23.506190 3963 log.go:172] (0xc0009b93f0) Data frame received for 3\nI0626 01:08:23.506202 3963 log.go:172] (0xc000854c80) (3) Data frame handling\nI0626 01:08:23.506352 3963 log.go:172] (0xc0009b93f0) Data frame received for 5\nI0626 01:08:23.506399 3963 log.go:172] (0xc000846500) (5) Data frame handling\nI0626 01:08:23.506425 3963 log.go:172] (0xc000846500) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 01:08:23.506548 3963 log.go:172] (0xc0009b93f0) Data frame received for 5\nI0626 01:08:23.506576 3963 log.go:172] (0xc000846500) (5) Data frame handling\nI0626 01:08:23.508315 3963 log.go:172] (0xc0009b93f0) Data frame received for 1\nI0626 01:08:23.508348 3963 log.go:172] (0xc0009ec140) (1) Data frame handling\nI0626 01:08:23.508519 3963 log.go:172] (0xc0009ec140) (1) Data frame sent\nI0626 01:08:23.508538 3963 log.go:172] (0xc0009b93f0) (0xc0009ec140) Stream removed, broadcasting: 1\nI0626 01:08:23.508553 3963 log.go:172] (0xc0009b93f0) Go away received\nI0626 01:08:23.508938 3963 log.go:172] (0xc0009b93f0) (0xc0009ec140) Stream removed, broadcasting: 1\nI0626 01:08:23.508967 3963 log.go:172] (0xc0009b93f0) (0xc000854c80) Stream removed, broadcasting: 3\nI0626 01:08:23.508978 3963 log.go:172] (0xc0009b93f0) (0xc000846500) Stream removed, broadcasting: 5\n" Jun 26 01:08:23.515: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 01:08:23.515: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 01:08:33.539: INFO: Waiting for StatefulSet statefulset-2382/ss2 to complete update Jun 26 01:08:33.539: INFO: Waiting for Pod statefulset-2382/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 26 01:08:33.539: INFO: Waiting for Pod statefulset-2382/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 26 01:08:33.539: INFO: Waiting for Pod statefulset-2382/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 26 01:08:43.547: INFO: Waiting for StatefulSet statefulset-2382/ss2 to complete update Jun 26 01:08:43.547: INFO: Waiting for Pod statefulset-2382/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jun 26 01:08:53.548: INFO: Waiting for StatefulSet statefulset-2382/ss2 to complete update Jun 26 01:08:53.548: INFO: Waiting for Pod statefulset-2382/ss2-0 to have 
revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jun 26 01:09:03.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 01:09:03.858: INFO: stderr: "I0626 01:09:03.682253 3983 log.go:172] (0xc00003a420) (0xc00013d900) Create stream\nI0626 01:09:03.682309 3983 log.go:172] (0xc00003a420) (0xc00013d900) Stream added, broadcasting: 1\nI0626 01:09:03.689473 3983 log.go:172] (0xc00003a420) Reply frame received for 1\nI0626 01:09:03.689575 3983 log.go:172] (0xc00003a420) (0xc0002ae6e0) Create stream\nI0626 01:09:03.689605 3983 log.go:172] (0xc00003a420) (0xc0002ae6e0) Stream added, broadcasting: 3\nI0626 01:09:03.693478 3983 log.go:172] (0xc00003a420) Reply frame received for 3\nI0626 01:09:03.693530 3983 log.go:172] (0xc00003a420) (0xc00035f4a0) Create stream\nI0626 01:09:03.693545 3983 log.go:172] (0xc00003a420) (0xc00035f4a0) Stream added, broadcasting: 5\nI0626 01:09:03.701470 3983 log.go:172] (0xc00003a420) Reply frame received for 5\nI0626 01:09:03.786193 3983 log.go:172] (0xc00003a420) Data frame received for 5\nI0626 01:09:03.786225 3983 log.go:172] (0xc00035f4a0) (5) Data frame handling\nI0626 01:09:03.786254 3983 log.go:172] (0xc00035f4a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 01:09:03.847966 3983 log.go:172] (0xc00003a420) Data frame received for 3\nI0626 01:09:03.848012 3983 log.go:172] (0xc0002ae6e0) (3) Data frame handling\nI0626 01:09:03.848145 3983 log.go:172] (0xc0002ae6e0) (3) Data frame sent\nI0626 01:09:03.848398 3983 log.go:172] (0xc00003a420) Data frame received for 5\nI0626 01:09:03.848431 3983 log.go:172] (0xc00035f4a0) (5) Data frame handling\nI0626 01:09:03.848563 3983 log.go:172] (0xc00003a420) Data frame received for 3\nI0626 01:09:03.848586 3983 log.go:172] (0xc0002ae6e0) (3) Data frame handling\nI0626 01:09:03.850081 3983 log.go:172] (0xc00003a420) Data frame received for 1\nI0626 01:09:03.850112 3983 log.go:172] (0xc00013d900) (1) Data frame handling\nI0626 01:09:03.850128 3983 log.go:172] (0xc00013d900) (1) Data frame sent\nI0626 01:09:03.850146 3983 log.go:172] (0xc00003a420) (0xc00013d900) Stream removed, broadcasting: 1\nI0626 01:09:03.850168 3983 log.go:172] (0xc00003a420) Go away received\nI0626 01:09:03.850563 3983 log.go:172] (0xc00003a420) (0xc00013d900) Stream removed, broadcasting: 1\nI0626 01:09:03.850596 3983 log.go:172] (0xc00003a420) (0xc0002ae6e0) Stream removed, broadcasting: 3\nI0626 01:09:03.850607 3983 log.go:172] (0xc00003a420) (0xc00035f4a0) Stream removed, broadcasting: 5\n" Jun 26 01:09:03.858: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 01:09:03.858: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 01:09:13.891: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 26 01:09:23.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2382 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:09:24.186: INFO: stderr: "I0626 01:09:24.102440 4005 log.go:172] (0xc000b05ce0) (0xc0007161e0) Create stream\nI0626 01:09:24.102521 4005 log.go:172] (0xc000b05ce0) (0xc0007161e0) Stream added, 
broadcasting: 1\nI0626 01:09:24.104956 4005 log.go:172] (0xc000b05ce0) Reply frame received for 1\nI0626 01:09:24.105350 4005 log.go:172] (0xc000b05ce0) (0xc000724be0) Create stream\nI0626 01:09:24.105374 4005 log.go:172] (0xc000b05ce0) (0xc000724be0) Stream added, broadcasting: 3\nI0626 01:09:24.106124 4005 log.go:172] (0xc000b05ce0) Reply frame received for 3\nI0626 01:09:24.106163 4005 log.go:172] (0xc000b05ce0) (0xc000716b40) Create stream\nI0626 01:09:24.106182 4005 log.go:172] (0xc000b05ce0) (0xc000716b40) Stream added, broadcasting: 5\nI0626 01:09:24.106914 4005 log.go:172] (0xc000b05ce0) Reply frame received for 5\nI0626 01:09:24.177304 4005 log.go:172] (0xc000b05ce0) Data frame received for 5\nI0626 01:09:24.177362 4005 log.go:172] (0xc000716b40) (5) Data frame handling\nI0626 01:09:24.177387 4005 log.go:172] (0xc000716b40) (5) Data frame sent\nI0626 01:09:24.177409 4005 log.go:172] (0xc000b05ce0) Data frame received for 5\nI0626 01:09:24.177424 4005 log.go:172] (0xc000716b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 01:09:24.177445 4005 log.go:172] (0xc000b05ce0) Data frame received for 3\nI0626 01:09:24.177473 4005 log.go:172] (0xc000724be0) (3) Data frame handling\nI0626 01:09:24.177503 4005 log.go:172] (0xc000724be0) (3) Data frame sent\nI0626 01:09:24.177526 4005 log.go:172] (0xc000b05ce0) Data frame received for 3\nI0626 01:09:24.177536 4005 log.go:172] (0xc000724be0) (3) Data frame handling\nI0626 01:09:24.178654 4005 log.go:172] (0xc000b05ce0) Data frame received for 1\nI0626 01:09:24.178671 4005 log.go:172] (0xc0007161e0) (1) Data frame handling\nI0626 01:09:24.178684 4005 log.go:172] (0xc0007161e0) (1) Data frame sent\nI0626 01:09:24.178702 4005 log.go:172] (0xc000b05ce0) (0xc0007161e0) Stream removed, broadcasting: 1\nI0626 01:09:24.178825 4005 log.go:172] (0xc000b05ce0) Go away received\nI0626 01:09:24.178997 4005 log.go:172] (0xc000b05ce0) (0xc0007161e0) Stream removed, broadcasting: 1\nI0626 01:09:24.179014 4005 log.go:172] (0xc000b05ce0) (0xc000724be0) Stream removed, broadcasting: 3\nI0626 01:09:24.179024 4005 log.go:172] (0xc000b05ce0) (0xc000716b40) Stream removed, broadcasting: 5\n" Jun 26 01:09:24.186: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 01:09:24.186: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 01:09:34.206: INFO: Waiting for StatefulSet statefulset-2382/ss2 to complete update Jun 26 01:09:34.206: INFO: Waiting for Pod statefulset-2382/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 26 01:09:34.206: INFO: Waiting for Pod statefulset-2382/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 26 01:09:34.206: INFO: Waiting for Pod statefulset-2382/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 26 01:09:44.215: INFO: Waiting for StatefulSet statefulset-2382/ss2 to complete update Jun 26 01:09:44.215: INFO: Waiting for Pod statefulset-2382/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jun 26 01:09:54.214: INFO: Waiting for StatefulSet statefulset-2382/ss2 to complete update Jun 26 01:09:54.214: INFO: Waiting for Pod statefulset-2382/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 26 
01:10:04.214: INFO: Deleting all statefulset in ns statefulset-2382 Jun 26 01:10:04.217: INFO: Scaling statefulset ss2 to 0 Jun 26 01:10:34.234: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 01:10:34.237: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:10:34.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2382" for this suite. • [SLOW TEST:171.781 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":294,"completed":278,"skipped":4541,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:10:34.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 26 01:10:34.456: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4860 /api/v1/namespaces/watch-4860/configmaps/e2e-watch-test-resource-version 75c5c971-fefc-4500-a760-47864039e6c3 15933294 0 2020-06-26 01:10:34 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-06-26 01:10:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 26 01:10:34.456: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4860 /api/v1/namespaces/watch-4860/configmaps/e2e-watch-test-resource-version 75c5c971-fefc-4500-a760-47864039e6c3 15933295 0 2020-06-26 01:10:34 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test 
Update v1 2020-06-26 01:10:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:10:34.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4860" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":294,"completed":279,"skipped":4562,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:10:34.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-8655ac07-be36-4805-a973-88530206f9e4 STEP: Creating a pod to test consume configMaps Jun 26 01:10:34.579: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bb88ceec-c8e8-48a6-b319-89275e91bf0d" in namespace "projected-5221" to be "Succeeded or Failed" Jun 26 01:10:34.600: INFO: Pod "pod-projected-configmaps-bb88ceec-c8e8-48a6-b319-89275e91bf0d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.29859ms Jun 26 01:10:36.604: INFO: Pod "pod-projected-configmaps-bb88ceec-c8e8-48a6-b319-89275e91bf0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025520383s Jun 26 01:10:38.608: INFO: Pod "pod-projected-configmaps-bb88ceec-c8e8-48a6-b319-89275e91bf0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029774614s Jun 26 01:10:40.612: INFO: Pod "pod-projected-configmaps-bb88ceec-c8e8-48a6-b319-89275e91bf0d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.033244438s STEP: Saw pod success Jun 26 01:10:40.612: INFO: Pod "pod-projected-configmaps-bb88ceec-c8e8-48a6-b319-89275e91bf0d" satisfied condition "Succeeded or Failed" Jun 26 01:10:40.614: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-bb88ceec-c8e8-48a6-b319-89275e91bf0d container projected-configmap-volume-test: STEP: delete the pod Jun 26 01:10:40.706: INFO: Waiting for pod pod-projected-configmaps-bb88ceec-c8e8-48a6-b319-89275e91bf0d to disappear Jun 26 01:10:40.720: INFO: Pod pod-projected-configmaps-bb88ceec-c8e8-48a6-b319-89275e91bf0d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:10:40.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5221" for this suite. • [SLOW TEST:6.264 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":294,"completed":280,"skipped":4587,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:10:40.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-5375/configmap-test-b81adf93-3601-431b-a55e-8c172c1a7682 STEP: Creating a pod to test consume configMaps Jun 26 01:10:40.801: INFO: Waiting up to 5m0s for pod "pod-configmaps-cc16f1d2-7c74-4c6b-971c-4e2c4e638265" in namespace "configmap-5375" to be "Succeeded or Failed" Jun 26 01:10:40.848: INFO: Pod "pod-configmaps-cc16f1d2-7c74-4c6b-971c-4e2c4e638265": Phase="Pending", Reason="", readiness=false. Elapsed: 47.083138ms Jun 26 01:10:42.853: INFO: Pod "pod-configmaps-cc16f1d2-7c74-4c6b-971c-4e2c4e638265": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051863623s Jun 26 01:10:44.857: INFO: Pod "pod-configmaps-cc16f1d2-7c74-4c6b-971c-4e2c4e638265": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.055763357s STEP: Saw pod success Jun 26 01:10:44.857: INFO: Pod "pod-configmaps-cc16f1d2-7c74-4c6b-971c-4e2c4e638265" satisfied condition "Succeeded or Failed" Jun 26 01:10:44.860: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-cc16f1d2-7c74-4c6b-971c-4e2c4e638265 container env-test: STEP: delete the pod Jun 26 01:10:45.043: INFO: Waiting for pod pod-configmaps-cc16f1d2-7c74-4c6b-971c-4e2c4e638265 to disappear Jun 26 01:10:45.056: INFO: Pod pod-configmaps-cc16f1d2-7c74-4c6b-971c-4e2c4e638265 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:10:45.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5375" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":294,"completed":281,"skipped":4591,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:10:45.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5368 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5368;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5368 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5368;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5368.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5368.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5368.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5368.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5368.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5368.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5368.svc SRV)" && test -n 
"$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5368.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5368.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5368.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 181.248.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.248.181_udp@PTR;check="$$(dig +tcp +noall +answer +search 181.248.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.248.181_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5368 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5368;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5368 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5368;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5368.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5368.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5368.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5368.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5368.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5368.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5368.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5368.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5368.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5368.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5368.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 181.248.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.248.181_udp@PTR;check="$$(dig +tcp +noall +answer +search 181.248.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.248.181_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 26 01:10:51.525: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.530: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.533: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.536: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.543: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.547: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.550: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.573: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.576: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.579: INFO: Unable to read jessie_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.583: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.586: INFO: Unable to read jessie_udp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.590: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.593: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.596: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:51.618: INFO: Lookups using dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5368 wheezy_tcp@dns-test-service.dns-5368 wheezy_udp@dns-test-service.dns-5368.svc wheezy_tcp@dns-test-service.dns-5368.svc wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5368 jessie_tcp@dns-test-service.dns-5368 jessie_udp@dns-test-service.dns-5368.svc jessie_tcp@dns-test-service.dns-5368.svc jessie_udp@_http._tcp.dns-test-service.dns-5368.svc jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc] Jun 26 01:10:56.624: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.628: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.631: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.635: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.638: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.642: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.645: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.649: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.673: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.676: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.679: INFO: Unable to read jessie_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.683: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.686: INFO: Unable to read jessie_udp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.689: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.692: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.695: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:10:56.714: INFO: Lookups using dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5368 wheezy_tcp@dns-test-service.dns-5368 wheezy_udp@dns-test-service.dns-5368.svc wheezy_tcp@dns-test-service.dns-5368.svc wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5368 jessie_tcp@dns-test-service.dns-5368 jessie_udp@dns-test-service.dns-5368.svc jessie_tcp@dns-test-service.dns-5368.svc jessie_udp@_http._tcp.dns-test-service.dns-5368.svc jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc] Jun 26 01:11:01.622: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.625: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.629: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.632: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368 from pod 
dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.636: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.639: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.642: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.645: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.669: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.672: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.675: INFO: Unable to read jessie_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.678: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.683: INFO: Unable to read jessie_udp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.686: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.690: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.694: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:01.732: INFO: Lookups using dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5368 wheezy_tcp@dns-test-service.dns-5368 wheezy_udp@dns-test-service.dns-5368.svc wheezy_tcp@dns-test-service.dns-5368.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5368 jessie_tcp@dns-test-service.dns-5368 jessie_udp@dns-test-service.dns-5368.svc jessie_tcp@dns-test-service.dns-5368.svc jessie_udp@_http._tcp.dns-test-service.dns-5368.svc jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc] Jun 26 01:11:06.622: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.626: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.630: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.633: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.637: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.640: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.644: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.648: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.672: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.676: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.679: INFO: Unable to read jessie_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.683: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.687: INFO: Unable to read jessie_udp@dns-test-service.dns-5368.svc from pod 
dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.690: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.693: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.697: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:06.716: INFO: Lookups using dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5368 wheezy_tcp@dns-test-service.dns-5368 wheezy_udp@dns-test-service.dns-5368.svc wheezy_tcp@dns-test-service.dns-5368.svc wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5368 jessie_tcp@dns-test-service.dns-5368 jessie_udp@dns-test-service.dns-5368.svc jessie_tcp@dns-test-service.dns-5368.svc jessie_udp@_http._tcp.dns-test-service.dns-5368.svc jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc] Jun 26 01:11:11.623: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.626: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.630: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.633: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.637: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.640: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.643: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.646: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod 
dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.671: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.674: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.677: INFO: Unable to read jessie_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.680: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.683: INFO: Unable to read jessie_udp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.687: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.690: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.694: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:11.713: INFO: Lookups using dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5368 wheezy_tcp@dns-test-service.dns-5368 wheezy_udp@dns-test-service.dns-5368.svc wheezy_tcp@dns-test-service.dns-5368.svc wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5368 jessie_tcp@dns-test-service.dns-5368 jessie_udp@dns-test-service.dns-5368.svc jessie_tcp@dns-test-service.dns-5368.svc jessie_udp@_http._tcp.dns-test-service.dns-5368.svc jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc] Jun 26 01:11:16.623: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.627: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.630: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the 
server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.634: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.636: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.639: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.642: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.645: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.678: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.682: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.686: INFO: Unable to read jessie_udp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.689: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368 from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.692: INFO: Unable to read jessie_udp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.696: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.699: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.703: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc from pod dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb: the server could not find the requested resource (get pods dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb) Jun 26 01:11:16.724: INFO: Lookups using dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5368 wheezy_tcp@dns-test-service.dns-5368 wheezy_udp@dns-test-service.dns-5368.svc wheezy_tcp@dns-test-service.dns-5368.svc wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5368 jessie_tcp@dns-test-service.dns-5368 jessie_udp@dns-test-service.dns-5368.svc jessie_tcp@dns-test-service.dns-5368.svc jessie_udp@_http._tcp.dns-test-service.dns-5368.svc jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc] Jun 26 01:11:21.716: INFO: DNS probes using dns-5368/dns-test-b0254aec-4687-4bdd-8e3a-6865b5adb8cb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:11:22.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5368" for this suite. • [SLOW TEST:37.453 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":294,"completed":282,"skipped":4680,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:11:22.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-4453248c-2ff7-4baf-a6f8-32251dbf62aa STEP: Creating a pod to test consume secrets Jun 26 01:11:22.648: INFO: Waiting up to 5m0s for pod "pod-secrets-6c1933b1-2649-4d63-bc6a-25ab13ebb169" in namespace "secrets-2939" to be "Succeeded or Failed" Jun 26 01:11:22.657: INFO: Pod "pod-secrets-6c1933b1-2649-4d63-bc6a-25ab13ebb169": Phase="Pending", Reason="", readiness=false. Elapsed: 9.565148ms Jun 26 01:11:24.661: INFO: Pod "pod-secrets-6c1933b1-2649-4d63-bc6a-25ab13ebb169": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013509244s Jun 26 01:11:26.681: INFO: Pod "pod-secrets-6c1933b1-2649-4d63-bc6a-25ab13ebb169": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033573689s STEP: Saw pod success Jun 26 01:11:26.681: INFO: Pod "pod-secrets-6c1933b1-2649-4d63-bc6a-25ab13ebb169" satisfied condition "Succeeded or Failed" Jun 26 01:11:26.684: INFO: Trying to get logs from node latest-worker pod pod-secrets-6c1933b1-2649-4d63-bc6a-25ab13ebb169 container secret-volume-test: STEP: delete the pod Jun 26 01:11:26.733: INFO: Waiting for pod pod-secrets-6c1933b1-2649-4d63-bc6a-25ab13ebb169 to disappear Jun 26 01:11:26.744: INFO: Pod pod-secrets-6c1933b1-2649-4d63-bc6a-25ab13ebb169 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:11:26.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2939" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":283,"skipped":4698,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:11:26.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0626 01:11:28.276611 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
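For reference, the orphaning behavior exercised by the delete step above (deleteOptions.PropagationPolicy=Orphan, after which the ReplicaSet must survive) can be reproduced by hand. A minimal sketch, assuming a throwaway deployment named "demo"; kubectl 1.20+ spells the flag --cascade=orphan, older releases use --cascade=false:

# create a deployment, then delete only the Deployment object,
# asking the garbage collector to orphan its dependents
kubectl create deployment demo --image=nginx
kubectl delete deployment demo --cascade=orphan
# the ReplicaSet created by the deployment should still be listed
kubectl get rs -l app=demo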
Jun 26 01:11:28.276: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 26 01:11:28.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9788" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":294,"completed":284,"skipped":4699,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 26 01:11:28.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-9782
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 26 01:11:28.444: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 26 01:11:28.671: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 26 01:11:30.676: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 26 01:11:32.676: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 26 01:11:34.676: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 26 01:11:36.676: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 26 01:11:38.676: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 26 01:11:40.676: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 26 01:11:42.676: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 26 01:11:44.676: INFO: The status of Pod
netserver-0 is Running (Ready = true) Jun 26 01:11:44.680: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 26 01:11:46.683: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 26 01:11:48.684: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 26 01:11:52.818: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9782 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 01:11:52.818: INFO: >>> kubeConfig: /root/.kube/config I0626 01:11:52.852119 8 log.go:172] (0xc001bec630) (0xc002ada640) Create stream I0626 01:11:52.852148 8 log.go:172] (0xc001bec630) (0xc002ada640) Stream added, broadcasting: 1 I0626 01:11:52.853999 8 log.go:172] (0xc001bec630) Reply frame received for 1 I0626 01:11:52.854028 8 log.go:172] (0xc001bec630) (0xc002ada6e0) Create stream I0626 01:11:52.854038 8 log.go:172] (0xc001bec630) (0xc002ada6e0) Stream added, broadcasting: 3 I0626 01:11:52.854724 8 log.go:172] (0xc001bec630) Reply frame received for 3 I0626 01:11:52.854751 8 log.go:172] (0xc001bec630) (0xc002ada820) Create stream I0626 01:11:52.854761 8 log.go:172] (0xc001bec630) (0xc002ada820) Stream added, broadcasting: 5 I0626 01:11:52.855484 8 log.go:172] (0xc001bec630) Reply frame received for 5 I0626 01:11:52.965077 8 log.go:172] (0xc001bec630) Data frame received for 5 I0626 01:11:52.965273 8 log.go:172] (0xc002ada820) (5) Data frame handling I0626 01:11:52.965314 8 log.go:172] (0xc001bec630) Data frame received for 3 I0626 01:11:52.965331 8 log.go:172] (0xc002ada6e0) (3) Data frame handling I0626 01:11:52.965347 8 log.go:172] (0xc002ada6e0) (3) Data frame sent I0626 01:11:52.965359 8 log.go:172] (0xc001bec630) Data frame received for 3 I0626 01:11:52.965372 8 log.go:172] (0xc002ada6e0) (3) Data frame handling I0626 01:11:52.967460 8 log.go:172] (0xc001bec630) Data frame received for 1 I0626 01:11:52.967499 8 log.go:172] (0xc002ada640) (1) Data frame handling I0626 01:11:52.967532 8 log.go:172] (0xc002ada640) (1) Data frame sent I0626 01:11:52.967561 8 log.go:172] (0xc001bec630) (0xc002ada640) Stream removed, broadcasting: 1 I0626 01:11:52.967590 8 log.go:172] (0xc001bec630) Go away received I0626 01:11:52.967692 8 log.go:172] (0xc001bec630) (0xc002ada640) Stream removed, broadcasting: 1 I0626 01:11:52.967722 8 log.go:172] (0xc001bec630) (0xc002ada6e0) Stream removed, broadcasting: 3 I0626 01:11:52.967760 8 log.go:172] (0xc001bec630) (0xc002ada820) Stream removed, broadcasting: 5 Jun 26 01:11:52.967: INFO: Found all expected endpoints: [netserver-0] Jun 26 01:11:52.971: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.76:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9782 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 26 01:11:52.971: INFO: >>> kubeConfig: /root/.kube/config I0626 01:11:53.004635 8 log.go:172] (0xc004e32420) (0xc002c590e0) Create stream I0626 01:11:53.004663 8 log.go:172] (0xc004e32420) (0xc002c590e0) Stream added, broadcasting: 1 I0626 01:11:53.006893 8 log.go:172] (0xc004e32420) Reply frame received for 1 I0626 01:11:53.006930 8 log.go:172] (0xc004e32420) (0xc0012a4000) Create stream I0626 01:11:53.006945 8 log.go:172] (0xc004e32420) (0xc0012a4000) Stream added, broadcasting: 3 
I0626 01:11:53.007824 8 log.go:172] (0xc004e32420) Reply frame received for 3 I0626 01:11:53.007863 8 log.go:172] (0xc004e32420) (0xc0026d6640) Create stream I0626 01:11:53.007876 8 log.go:172] (0xc004e32420) (0xc0026d6640) Stream added, broadcasting: 5 I0626 01:11:53.008917 8 log.go:172] (0xc004e32420) Reply frame received for 5 I0626 01:11:53.097500 8 log.go:172] (0xc004e32420) Data frame received for 3 I0626 01:11:53.097558 8 log.go:172] (0xc0012a4000) (3) Data frame handling I0626 01:11:53.097604 8 log.go:172] (0xc0012a4000) (3) Data frame sent I0626 01:11:53.097818 8 log.go:172] (0xc004e32420) Data frame received for 3 I0626 01:11:53.097849 8 log.go:172] (0xc0012a4000) (3) Data frame handling I0626 01:11:53.097875 8 log.go:172] (0xc004e32420) Data frame received for 5 I0626 01:11:53.097894 8 log.go:172] (0xc0026d6640) (5) Data frame handling I0626 01:11:53.099978 8 log.go:172] (0xc004e32420) Data frame received for 1 I0626 01:11:53.100006 8 log.go:172] (0xc002c590e0) (1) Data frame handling I0626 01:11:53.100022 8 log.go:172] (0xc002c590e0) (1) Data frame sent I0626 01:11:53.100049 8 log.go:172] (0xc004e32420) (0xc002c590e0) Stream removed, broadcasting: 1 I0626 01:11:53.100129 8 log.go:172] (0xc004e32420) Go away received I0626 01:11:53.100185 8 log.go:172] (0xc004e32420) (0xc002c590e0) Stream removed, broadcasting: 1 I0626 01:11:53.100233 8 log.go:172] (0xc004e32420) (0xc0012a4000) Stream removed, broadcasting: 3 I0626 01:11:53.100273 8 log.go:172] (0xc004e32420) (0xc0026d6640) Stream removed, broadcasting: 5 Jun 26 01:11:53.100: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:11:53.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9782" for this suite. 
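The check that just passed boils down to the curl shown in the ExecWithOptions lines: from a host-network test pod, each netserver pod is fetched by pod IP on port 8080 and must return a non-empty hostname. Run by hand while the test namespace still exists, it looks like this (namespace, pod name and IP copied from the log above; the namespace is destroyed at teardown, so this only works during the test):

# ask netserver-0 for its hostname directly by pod IP;
# any non-empty answer means node-to-pod HTTP traffic works
kubectl exec -n pod-network-test-9782 host-test-container-pod -c agnhost -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.5:8080/hostName"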
• [SLOW TEST:24.826 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":285,"skipped":4713,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]}
S
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 26 01:11:53.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5093.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5093.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5093.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  sleep 1;
done
STEP: Running these commands on jessie:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5093.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5093.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5093.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  sleep 1;
done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 26 01:11:59.520: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703)
Jun 26 01:11:59.525: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703)
Jun 26 01:11:59.587: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703)
Jun 26 01:11:59.597: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703)
Jun 26 01:11:59.977: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703)
Jun 26 01:11:59.992: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703)
Jun 26 01:11:59.998: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod
dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:00.001: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:00.008: INFO: Lookups using dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local] Jun 26 01:12:05.014: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:05.018: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:05.022: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:05.025: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:05.036: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:05.039: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:05.042: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:05.045: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:05.074: INFO: Lookups using dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local] Jun 26 01:12:10.014: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:10.018: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:10.022: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:10.026: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:10.036: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:10.040: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:10.043: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:10.047: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:10.054: INFO: Lookups using dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local] Jun 26 01:12:15.012: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:15.016: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:15.019: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:15.022: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:15.031: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:15.034: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:15.037: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:15.040: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:15.045: INFO: Lookups using dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local] Jun 26 01:12:20.013: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:20.017: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:20.020: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:20.024: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested 
resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:20.031: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:20.034: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:20.037: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:20.040: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:20.047: INFO: Lookups using dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local] Jun 26 01:12:25.024: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:25.027: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:25.031: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:25.034: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:25.044: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:25.047: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:25.050: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:25.053: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local from pod dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703: the server could not find the requested resource (get pods dns-test-45616d81-c3fb-45f3-810b-f64beede4703) Jun 26 01:12:25.058: INFO: Lookups using dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5093.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5093.svc.cluster.local jessie_udp@dns-test-service-2.dns-5093.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5093.svc.cluster.local] Jun 26 01:12:30.053: INFO: DNS probes using dns-5093/dns-test-45616d81-c3fb-45f3-810b-f64beede4703 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:12:30.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5093" for this suite. • [SLOW TEST:37.561 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":294,"completed":286,"skipped":4714,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:12:30.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:12:31.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2405" for this suite. 
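The Lease test above only exercises basic CRUD against the coordination.k8s.io Lease API, so there is little to see in the log itself. The same API can be inspected by hand; a minimal sketch, assuming a standard cluster where each kubelet keeps a heartbeat Lease in kube-node-lease:

# list the per-node heartbeat Leases maintained by each kubelet
kubectl get leases -n kube-node-lease
# inspect one Lease spec (holderIdentity, leaseDurationSeconds, renewTime)
kubectl get lease -n kube-node-lease -o yaml | head -n 40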
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":294,"completed":287,"skipped":4737,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:12:31.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 26 01:12:31.156: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-3abe509d-447a-4cc5-91f2-8f6894058b35" in namespace "security-context-test-4224" to be "Succeeded or Failed" Jun 26 01:12:31.183: INFO: Pod "busybox-privileged-false-3abe509d-447a-4cc5-91f2-8f6894058b35": Phase="Pending", Reason="", readiness=false. Elapsed: 26.99729ms Jun 26 01:12:33.233: INFO: Pod "busybox-privileged-false-3abe509d-447a-4cc5-91f2-8f6894058b35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077070442s Jun 26 01:12:35.251: INFO: Pod "busybox-privileged-false-3abe509d-447a-4cc5-91f2-8f6894058b35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094984161s Jun 26 01:12:35.251: INFO: Pod "busybox-privileged-false-3abe509d-447a-4cc5-91f2-8f6894058b35" satisfied condition "Succeeded or Failed" Jun 26 01:12:35.268: INFO: Got logs for pod "busybox-privileged-false-3abe509d-447a-4cc5-91f2-8f6894058b35": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:12:35.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4224" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":288,"skipped":4749,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:12:35.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Jun 26 01:12:35.342: INFO: Waiting up to 5m0s for pod "var-expansion-a28d265b-1082-40ac-b5b6-5afcd82a03e1" in namespace "var-expansion-1335" to be "Succeeded or Failed" Jun 26 01:12:35.401: INFO: Pod "var-expansion-a28d265b-1082-40ac-b5b6-5afcd82a03e1": Phase="Pending", Reason="", readiness=false. Elapsed: 58.640723ms Jun 26 01:12:37.490: INFO: Pod "var-expansion-a28d265b-1082-40ac-b5b6-5afcd82a03e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14816794s Jun 26 01:12:39.493: INFO: Pod "var-expansion-a28d265b-1082-40ac-b5b6-5afcd82a03e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151109602s STEP: Saw pod success Jun 26 01:12:39.493: INFO: Pod "var-expansion-a28d265b-1082-40ac-b5b6-5afcd82a03e1" satisfied condition "Succeeded or Failed" Jun 26 01:12:39.496: INFO: Trying to get logs from node latest-worker2 pod var-expansion-a28d265b-1082-40ac-b5b6-5afcd82a03e1 container dapi-container: STEP: delete the pod Jun 26 01:12:39.529: INFO: Waiting for pod var-expansion-a28d265b-1082-40ac-b5b6-5afcd82a03e1 to disappear Jun 26 01:12:39.539: INFO: Pod var-expansion-a28d265b-1082-40ac-b5b6-5afcd82a03e1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:12:39.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1335" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":294,"completed":289,"skipped":4751,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:12:39.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 26 01:12:43.766: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:12:43.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7642" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":290,"skipped":4754,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:12:43.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-e81174cb-4cab-4c0a-b366-b22cdd30bc07 STEP: Creating a pod to test consume configMaps Jun 26 01:12:43.916: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-99741cad-0b1c-4c6d-8bcd-592e46860b05" in namespace "projected-9842" to be "Succeeded or Failed" Jun 26 01:12:43.935: INFO: Pod "pod-projected-configmaps-99741cad-0b1c-4c6d-8bcd-592e46860b05": Phase="Pending", Reason="", readiness=false. Elapsed: 19.312469ms Jun 26 01:12:45.939: INFO: Pod "pod-projected-configmaps-99741cad-0b1c-4c6d-8bcd-592e46860b05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023279531s Jun 26 01:12:47.970: INFO: Pod "pod-projected-configmaps-99741cad-0b1c-4c6d-8bcd-592e46860b05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053496922s STEP: Saw pod success Jun 26 01:12:47.970: INFO: Pod "pod-projected-configmaps-99741cad-0b1c-4c6d-8bcd-592e46860b05" satisfied condition "Succeeded or Failed" Jun 26 01:12:47.972: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-99741cad-0b1c-4c6d-8bcd-592e46860b05 container projected-configmap-volume-test: STEP: delete the pod Jun 26 01:12:48.012: INFO: Waiting for pod pod-projected-configmaps-99741cad-0b1c-4c6d-8bcd-592e46860b05 to disappear Jun 26 01:12:48.022: INFO: Pod pod-projected-configmaps-99741cad-0b1c-4c6d-8bcd-592e46860b05 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 26 01:12:48.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9842" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":291,"skipped":4776,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 26 01:12:48.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6474 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-6474 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6474 Jun 26 01:12:48.158: INFO: Found 0 stateful pods, waiting for 1 Jun 26 01:12:58.185: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 26 01:12:58.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 01:13:01.135: INFO: stderr: "I0626 01:13:00.973846 4025 log.go:172] (0xc00003b550) (0xc000869cc0) Create stream\nI0626 01:13:00.973958 4025 log.go:172] (0xc00003b550) (0xc000869cc0) Stream added, broadcasting: 1\nI0626 01:13:00.975954 4025 log.go:172] (0xc00003b550) Reply frame received for 1\nI0626 01:13:00.976001 4025 log.go:172] (0xc00003b550) (0xc0008548c0) Create stream\nI0626 01:13:00.976018 4025 log.go:172] (0xc00003b550) (0xc0008548c0) Stream added, broadcasting: 3\nI0626 01:13:00.977014 4025 log.go:172] (0xc00003b550) Reply frame received for 3\nI0626 01:13:00.977040 4025 log.go:172] (0xc00003b550) (0xc00084c0a0) Create stream\nI0626 01:13:00.977047 4025 log.go:172] (0xc00003b550) (0xc00084c0a0) Stream added, broadcasting: 5\nI0626 01:13:00.978030 4025 log.go:172] (0xc00003b550) Reply frame received for 5\nI0626 01:13:01.068387 4025 log.go:172] (0xc00003b550) Data frame received for 5\nI0626 01:13:01.068441 4025 log.go:172] (0xc00084c0a0) (5) Data frame handling\nI0626 01:13:01.068480 4025 log.go:172] (0xc00084c0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 01:13:01.122775 4025 log.go:172] (0xc00003b550) Data frame received for 3\nI0626 01:13:01.122884 4025 log.go:172] 
(0xc0008548c0) (3) Data frame handling\nI0626 01:13:01.122980 4025 log.go:172] (0xc00003b550) Data frame received for 5\nI0626 01:13:01.123033 4025 log.go:172] (0xc00084c0a0) (5) Data frame handling\nI0626 01:13:01.123071 4025 log.go:172] (0xc0008548c0) (3) Data frame sent\nI0626 01:13:01.123098 4025 log.go:172] (0xc00003b550) Data frame received for 3\nI0626 01:13:01.123109 4025 log.go:172] (0xc0008548c0) (3) Data frame handling\nI0626 01:13:01.125294 4025 log.go:172] (0xc00003b550) Data frame received for 1\nI0626 01:13:01.125353 4025 log.go:172] (0xc000869cc0) (1) Data frame handling\nI0626 01:13:01.125362 4025 log.go:172] (0xc000869cc0) (1) Data frame sent\nI0626 01:13:01.125376 4025 log.go:172] (0xc00003b550) (0xc000869cc0) Stream removed, broadcasting: 1\nI0626 01:13:01.125405 4025 log.go:172] (0xc00003b550) Go away received\nI0626 01:13:01.125698 4025 log.go:172] (0xc00003b550) (0xc000869cc0) Stream removed, broadcasting: 1\nI0626 01:13:01.125718 4025 log.go:172] (0xc00003b550) (0xc0008548c0) Stream removed, broadcasting: 3\nI0626 01:13:01.125730 4025 log.go:172] (0xc00003b550) (0xc00084c0a0) Stream removed, broadcasting: 5\n" Jun 26 01:13:01.135: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 01:13:01.135: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 01:13:01.139: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 26 01:13:11.144: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 26 01:13:11.144: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 01:13:11.179: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 01:13:11.179: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC }] Jun 26 01:13:11.179: INFO: Jun 26 01:13:11.179: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 26 01:13:12.226: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993683656s Jun 26 01:13:13.396: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.946553063s Jun 26 01:13:14.402: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.776999175s Jun 26 01:13:15.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.77131956s Jun 26 01:13:16.413: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.766088558s Jun 26 01:13:17.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.75992138s Jun 26 01:13:18.423: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.754590353s Jun 26 01:13:19.428: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.749798235s Jun 26 01:13:20.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 745.102288ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6474 Jun 26 01:13:21.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:13:21.679: INFO: stderr: "I0626 01:13:21.582477 4058 log.go:172] (0xc000bbf3f0) (0xc000867b80) Create stream\nI0626 01:13:21.582528 4058 log.go:172] (0xc000bbf3f0) (0xc000867b80) Stream added, broadcasting: 1\nI0626 01:13:21.587018 4058 log.go:172] (0xc000bbf3f0) Reply frame received for 1\nI0626 01:13:21.587071 4058 log.go:172] (0xc000bbf3f0) (0xc000370460) Create stream\nI0626 01:13:21.587084 4058 log.go:172] (0xc000bbf3f0) (0xc000370460) Stream added, broadcasting: 3\nI0626 01:13:21.588016 4058 log.go:172] (0xc000bbf3f0) Reply frame received for 3\nI0626 01:13:21.588059 4058 log.go:172] (0xc000bbf3f0) (0xc00085e460) Create stream\nI0626 01:13:21.588071 4058 log.go:172] (0xc000bbf3f0) (0xc00085e460) Stream added, broadcasting: 5\nI0626 01:13:21.588832 4058 log.go:172] (0xc000bbf3f0) Reply frame received for 5\nI0626 01:13:21.671582 4058 log.go:172] (0xc000bbf3f0) Data frame received for 5\nI0626 01:13:21.671635 4058 log.go:172] (0xc00085e460) (5) Data frame handling\nI0626 01:13:21.671653 4058 log.go:172] (0xc00085e460) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 01:13:21.671668 4058 log.go:172] (0xc000bbf3f0) Data frame received for 5\nI0626 01:13:21.671715 4058 log.go:172] (0xc00085e460) (5) Data frame handling\nI0626 01:13:21.671763 4058 log.go:172] (0xc000bbf3f0) Data frame received for 3\nI0626 01:13:21.671806 4058 log.go:172] (0xc000370460) (3) Data frame handling\nI0626 01:13:21.671837 4058 log.go:172] (0xc000370460) (3) Data frame sent\nI0626 01:13:21.671870 4058 log.go:172] (0xc000bbf3f0) Data frame received for 3\nI0626 01:13:21.671882 4058 log.go:172] (0xc000370460) (3) Data frame handling\nI0626 01:13:21.673384 4058 log.go:172] (0xc000bbf3f0) Data frame received for 1\nI0626 01:13:21.673411 4058 log.go:172] (0xc000867b80) (1) Data frame handling\nI0626 01:13:21.673427 4058 log.go:172] (0xc000867b80) (1) Data frame sent\nI0626 01:13:21.673614 4058 log.go:172] (0xc000bbf3f0) (0xc000867b80) Stream removed, broadcasting: 1\nI0626 01:13:21.673907 4058 log.go:172] (0xc000bbf3f0) Go away received\nI0626 01:13:21.674063 4058 log.go:172] (0xc000bbf3f0) (0xc000867b80) Stream removed, broadcasting: 1\nI0626 01:13:21.674087 4058 log.go:172] (0xc000bbf3f0) (0xc000370460) Stream removed, broadcasting: 3\nI0626 01:13:21.674107 4058 log.go:172] (0xc000bbf3f0) (0xc00085e460) Stream removed, broadcasting: 5\n" Jun 26 01:13:21.679: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 01:13:21.679: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 01:13:21.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:13:21.944: INFO: stderr: "I0626 01:13:21.830072 4080 log.go:172] (0xc000926d10) (0xc000859860) Create stream\nI0626 01:13:21.830189 4080 log.go:172] (0xc000926d10) (0xc000859860) Stream added, broadcasting: 1\nI0626 01:13:21.833817 4080 log.go:172] (0xc000926d10) Reply frame received for 1\nI0626 01:13:21.833862 4080 log.go:172] (0xc000926d10) (0xc00084b360) Create stream\nI0626 01:13:21.833876 4080 log.go:172] (0xc000926d10) (0xc00084b360) Stream added, broadcasting: 3\nI0626 01:13:21.834806 4080 log.go:172] (0xc000926d10) Reply frame received for 3\nI0626 
01:13:21.834849 4080 log.go:172] (0xc000926d10) (0xc00075cb40) Create stream\nI0626 01:13:21.834864 4080 log.go:172] (0xc000926d10) (0xc00075cb40) Stream added, broadcasting: 5\nI0626 01:13:21.835637 4080 log.go:172] (0xc000926d10) Reply frame received for 5\nI0626 01:13:21.912393 4080 log.go:172] (0xc000926d10) Data frame received for 5\nI0626 01:13:21.912426 4080 log.go:172] (0xc00075cb40) (5) Data frame handling\nI0626 01:13:21.912447 4080 log.go:172] (0xc00075cb40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0626 01:13:21.933799 4080 log.go:172] (0xc000926d10) Data frame received for 3\nI0626 01:13:21.933817 4080 log.go:172] (0xc00084b360) (3) Data frame handling\nI0626 01:13:21.933852 4080 log.go:172] (0xc000926d10) Data frame received for 5\nI0626 01:13:21.933889 4080 log.go:172] (0xc00075cb40) (5) Data frame handling\nI0626 01:13:21.933912 4080 log.go:172] (0xc00075cb40) (5) Data frame sent\nI0626 01:13:21.933930 4080 log.go:172] (0xc000926d10) Data frame received for 5\nI0626 01:13:21.933958 4080 log.go:172] (0xc00075cb40) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0626 01:13:21.933995 4080 log.go:172] (0xc00075cb40) (5) Data frame sent\nI0626 01:13:21.934019 4080 log.go:172] (0xc00084b360) (3) Data frame sent\nI0626 01:13:21.934038 4080 log.go:172] (0xc000926d10) Data frame received for 3\nI0626 01:13:21.934060 4080 log.go:172] (0xc00084b360) (3) Data frame handling\nI0626 01:13:21.934515 4080 log.go:172] (0xc000926d10) Data frame received for 5\nI0626 01:13:21.934557 4080 log.go:172] (0xc00075cb40) (5) Data frame handling\nI0626 01:13:21.936193 4080 log.go:172] (0xc000926d10) Data frame received for 1\nI0626 01:13:21.936225 4080 log.go:172] (0xc000859860) (1) Data frame handling\nI0626 01:13:21.936255 4080 log.go:172] (0xc000859860) (1) Data frame sent\nI0626 01:13:21.936356 4080 log.go:172] (0xc000926d10) (0xc000859860) Stream removed, broadcasting: 1\nI0626 01:13:21.936388 4080 log.go:172] (0xc000926d10) Go away received\nI0626 01:13:21.936675 4080 log.go:172] (0xc000926d10) (0xc000859860) Stream removed, broadcasting: 1\nI0626 01:13:21.936689 4080 log.go:172] (0xc000926d10) (0xc00084b360) Stream removed, broadcasting: 3\nI0626 01:13:21.936694 4080 log.go:172] (0xc000926d10) (0xc00075cb40) Stream removed, broadcasting: 5\n" Jun 26 01:13:21.944: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 01:13:21.944: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 01:13:21.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:13:22.177: INFO: stderr: "I0626 01:13:22.086903 4102 log.go:172] (0xc0009976b0) (0xc000716460) Create stream\nI0626 01:13:22.086967 4102 log.go:172] (0xc0009976b0) (0xc000716460) Stream added, broadcasting: 1\nI0626 01:13:22.091562 4102 log.go:172] (0xc0009976b0) Reply frame received for 1\nI0626 01:13:22.091610 4102 log.go:172] (0xc0009976b0) (0xc0006eb220) Create stream\nI0626 01:13:22.091627 4102 log.go:172] (0xc0009976b0) (0xc0006eb220) Stream added, broadcasting: 3\nI0626 01:13:22.092901 4102 log.go:172] (0xc0009976b0) Reply frame received for 3\nI0626 01:13:22.092946 4102 log.go:172] (0xc0009976b0) (0xc00065c1e0) Create stream\nI0626 01:13:22.092958 4102 log.go:172] 
(0xc0009976b0) (0xc00065c1e0) Stream added, broadcasting: 5\nI0626 01:13:22.094075 4102 log.go:172] (0xc0009976b0) Reply frame received for 5\nI0626 01:13:22.168920 4102 log.go:172] (0xc0009976b0) Data frame received for 3\nI0626 01:13:22.168958 4102 log.go:172] (0xc0006eb220) (3) Data frame handling\nI0626 01:13:22.168969 4102 log.go:172] (0xc0006eb220) (3) Data frame sent\nI0626 01:13:22.168976 4102 log.go:172] (0xc0009976b0) Data frame received for 3\nI0626 01:13:22.168982 4102 log.go:172] (0xc0006eb220) (3) Data frame handling\nI0626 01:13:22.169005 4102 log.go:172] (0xc0009976b0) Data frame received for 5\nI0626 01:13:22.169014 4102 log.go:172] (0xc00065c1e0) (5) Data frame handling\nI0626 01:13:22.169031 4102 log.go:172] (0xc00065c1e0) (5) Data frame sent\nI0626 01:13:22.169039 4102 log.go:172] (0xc0009976b0) Data frame received for 5\nI0626 01:13:22.169045 4102 log.go:172] (0xc00065c1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0626 01:13:22.170333 4102 log.go:172] (0xc0009976b0) Data frame received for 1\nI0626 01:13:22.170355 4102 log.go:172] (0xc000716460) (1) Data frame handling\nI0626 01:13:22.170365 4102 log.go:172] (0xc000716460) (1) Data frame sent\nI0626 01:13:22.170376 4102 log.go:172] (0xc0009976b0) (0xc000716460) Stream removed, broadcasting: 1\nI0626 01:13:22.170387 4102 log.go:172] (0xc0009976b0) Go away received\nI0626 01:13:22.170726 4102 log.go:172] (0xc0009976b0) (0xc000716460) Stream removed, broadcasting: 1\nI0626 01:13:22.170747 4102 log.go:172] (0xc0009976b0) (0xc0006eb220) Stream removed, broadcasting: 3\nI0626 01:13:22.170755 4102 log.go:172] (0xc0009976b0) (0xc00065c1e0) Stream removed, broadcasting: 5\n" Jun 26 01:13:22.177: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 26 01:13:22.177: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 26 01:13:22.223: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 26 01:13:22.223: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 26 01:13:22.223: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 26 01:13:22.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 01:13:22.451: INFO: stderr: "I0626 01:13:22.365979 4123 log.go:172] (0xc000ad38c0) (0xc000b86500) Create stream\nI0626 01:13:22.366044 4123 log.go:172] (0xc000ad38c0) (0xc000b86500) Stream added, broadcasting: 1\nI0626 01:13:22.371154 4123 log.go:172] (0xc000ad38c0) Reply frame received for 1\nI0626 01:13:22.371194 4123 log.go:172] (0xc000ad38c0) (0xc000648aa0) Create stream\nI0626 01:13:22.371203 4123 log.go:172] (0xc000ad38c0) (0xc000648aa0) Stream added, broadcasting: 3\nI0626 01:13:22.372050 4123 log.go:172] (0xc000ad38c0) Reply frame received for 3\nI0626 01:13:22.372099 4123 log.go:172] (0xc000ad38c0) (0xc0006ed040) Create stream\nI0626 01:13:22.372115 4123 log.go:172] (0xc000ad38c0) (0xc0006ed040) Stream added, broadcasting: 5\nI0626 01:13:22.373444 4123 log.go:172] (0xc000ad38c0) Reply frame received for 5\nI0626 01:13:22.443854 4123 log.go:172] (0xc000ad38c0) 
Data frame received for 5\nI0626 01:13:22.443911 4123 log.go:172] (0xc0006ed040) (5) Data frame handling\nI0626 01:13:22.443931 4123 log.go:172] (0xc0006ed040) (5) Data frame sent\nI0626 01:13:22.443944 4123 log.go:172] (0xc000ad38c0) Data frame received for 5\nI0626 01:13:22.443955 4123 log.go:172] (0xc0006ed040) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 01:13:22.444024 4123 log.go:172] (0xc000ad38c0) Data frame received for 3\nI0626 01:13:22.444081 4123 log.go:172] (0xc000648aa0) (3) Data frame handling\nI0626 01:13:22.444109 4123 log.go:172] (0xc000648aa0) (3) Data frame sent\nI0626 01:13:22.444129 4123 log.go:172] (0xc000ad38c0) Data frame received for 3\nI0626 01:13:22.444149 4123 log.go:172] (0xc000648aa0) (3) Data frame handling\nI0626 01:13:22.444998 4123 log.go:172] (0xc000ad38c0) Data frame received for 1\nI0626 01:13:22.445030 4123 log.go:172] (0xc000b86500) (1) Data frame handling\nI0626 01:13:22.445046 4123 log.go:172] (0xc000b86500) (1) Data frame sent\nI0626 01:13:22.445064 4123 log.go:172] (0xc000ad38c0) (0xc000b86500) Stream removed, broadcasting: 1\nI0626 01:13:22.445085 4123 log.go:172] (0xc000ad38c0) Go away received\nI0626 01:13:22.445518 4123 log.go:172] (0xc000ad38c0) (0xc000b86500) Stream removed, broadcasting: 1\nI0626 01:13:22.445535 4123 log.go:172] (0xc000ad38c0) (0xc000648aa0) Stream removed, broadcasting: 3\nI0626 01:13:22.445542 4123 log.go:172] (0xc000ad38c0) (0xc0006ed040) Stream removed, broadcasting: 5\n" Jun 26 01:13:22.452: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 01:13:22.452: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 01:13:22.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 01:13:22.719: INFO: stderr: "I0626 01:13:22.587232 4143 log.go:172] (0xc000aed550) (0xc000bc6280) Create stream\nI0626 01:13:22.587280 4143 log.go:172] (0xc000aed550) (0xc000bc6280) Stream added, broadcasting: 1\nI0626 01:13:22.591344 4143 log.go:172] (0xc000aed550) Reply frame received for 1\nI0626 01:13:22.591384 4143 log.go:172] (0xc000aed550) (0xc000730c80) Create stream\nI0626 01:13:22.591402 4143 log.go:172] (0xc000aed550) (0xc000730c80) Stream added, broadcasting: 3\nI0626 01:13:22.592188 4143 log.go:172] (0xc000aed550) Reply frame received for 3\nI0626 01:13:22.592235 4143 log.go:172] (0xc000aed550) (0xc00044c140) Create stream\nI0626 01:13:22.592250 4143 log.go:172] (0xc000aed550) (0xc00044c140) Stream added, broadcasting: 5\nI0626 01:13:22.592946 4143 log.go:172] (0xc000aed550) Reply frame received for 5\nI0626 01:13:22.645814 4143 log.go:172] (0xc000aed550) Data frame received for 5\nI0626 01:13:22.645840 4143 log.go:172] (0xc00044c140) (5) Data frame handling\nI0626 01:13:22.645855 4143 log.go:172] (0xc00044c140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 01:13:22.710146 4143 log.go:172] (0xc000aed550) Data frame received for 3\nI0626 01:13:22.710179 4143 log.go:172] (0xc000730c80) (3) Data frame handling\nI0626 01:13:22.710199 4143 log.go:172] (0xc000730c80) (3) Data frame sent\nI0626 01:13:22.710557 4143 log.go:172] (0xc000aed550) Data frame received for 5\nI0626 01:13:22.710581 4143 log.go:172] (0xc00044c140) (5) Data frame handling\nI0626 01:13:22.710604 4143 
log.go:172] (0xc000aed550) Data frame received for 3\nI0626 01:13:22.710613 4143 log.go:172] (0xc000730c80) (3) Data frame handling\nI0626 01:13:22.712497 4143 log.go:172] (0xc000aed550) Data frame received for 1\nI0626 01:13:22.712509 4143 log.go:172] (0xc000bc6280) (1) Data frame handling\nI0626 01:13:22.712516 4143 log.go:172] (0xc000bc6280) (1) Data frame sent\nI0626 01:13:22.712594 4143 log.go:172] (0xc000aed550) (0xc000bc6280) Stream removed, broadcasting: 1\nI0626 01:13:22.712646 4143 log.go:172] (0xc000aed550) Go away received\nI0626 01:13:22.713033 4143 log.go:172] (0xc000aed550) (0xc000bc6280) Stream removed, broadcasting: 1\nI0626 01:13:22.713057 4143 log.go:172] (0xc000aed550) (0xc000730c80) Stream removed, broadcasting: 3\nI0626 01:13:22.713069 4143 log.go:172] (0xc000aed550) (0xc00044c140) Stream removed, broadcasting: 5\n" Jun 26 01:13:22.719: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 01:13:22.719: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 01:13:22.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 26 01:13:22.961: INFO: stderr: "I0626 01:13:22.852739 4164 log.go:172] (0xc0007ac000) (0xc0007303c0) Create stream\nI0626 01:13:22.852797 4164 log.go:172] (0xc0007ac000) (0xc0007303c0) Stream added, broadcasting: 1\nI0626 01:13:22.854727 4164 log.go:172] (0xc0007ac000) Reply frame received for 1\nI0626 01:13:22.854760 4164 log.go:172] (0xc0007ac000) (0xc0006580a0) Create stream\nI0626 01:13:22.854769 4164 log.go:172] (0xc0007ac000) (0xc0006580a0) Stream added, broadcasting: 3\nI0626 01:13:22.855427 4164 log.go:172] (0xc0007ac000) Reply frame received for 3\nI0626 01:13:22.855452 4164 log.go:172] (0xc0007ac000) (0xc0005f0d20) Create stream\nI0626 01:13:22.855462 4164 log.go:172] (0xc0007ac000) (0xc0005f0d20) Stream added, broadcasting: 5\nI0626 01:13:22.856094 4164 log.go:172] (0xc0007ac000) Reply frame received for 5\nI0626 01:13:22.927183 4164 log.go:172] (0xc0007ac000) Data frame received for 5\nI0626 01:13:22.927211 4164 log.go:172] (0xc0005f0d20) (5) Data frame handling\nI0626 01:13:22.927350 4164 log.go:172] (0xc0005f0d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0626 01:13:22.953406 4164 log.go:172] (0xc0007ac000) Data frame received for 3\nI0626 01:13:22.953457 4164 log.go:172] (0xc0006580a0) (3) Data frame handling\nI0626 01:13:22.953481 4164 log.go:172] (0xc0006580a0) (3) Data frame sent\nI0626 01:13:22.953500 4164 log.go:172] (0xc0007ac000) Data frame received for 3\nI0626 01:13:22.953538 4164 log.go:172] (0xc0006580a0) (3) Data frame handling\nI0626 01:13:22.953778 4164 log.go:172] (0xc0007ac000) Data frame received for 5\nI0626 01:13:22.953794 4164 log.go:172] (0xc0005f0d20) (5) Data frame handling\nI0626 01:13:22.955608 4164 log.go:172] (0xc0007ac000) Data frame received for 1\nI0626 01:13:22.955646 4164 log.go:172] (0xc0007303c0) (1) Data frame handling\nI0626 01:13:22.955863 4164 log.go:172] (0xc0007303c0) (1) Data frame sent\nI0626 01:13:22.955903 4164 log.go:172] (0xc0007ac000) (0xc0007303c0) Stream removed, broadcasting: 1\nI0626 01:13:22.955932 4164 log.go:172] (0xc0007ac000) Go away received\nI0626 01:13:22.956418 4164 log.go:172] (0xc0007ac000) (0xc0007303c0) Stream removed, broadcasting: 1\nI0626 
01:13:22.956455 4164 log.go:172] (0xc0007ac000) (0xc0006580a0) Stream removed, broadcasting: 3\nI0626 01:13:22.956478 4164 log.go:172] (0xc0007ac000) (0xc0005f0d20) Stream removed, broadcasting: 5\n" Jun 26 01:13:22.961: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 26 01:13:22.962: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 26 01:13:22.962: INFO: Waiting for statefulset status.replicas updated to 0 Jun 26 01:13:22.964: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 26 01:13:32.992: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 26 01:13:32.992: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 26 01:13:32.992: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 26 01:13:33.009: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 01:13:33.009: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC }] Jun 26 01:13:33.009: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:33.009: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:33.009: INFO: Jun 26 01:13:33.009: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 01:13:34.204: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 01:13:34.204: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC }] Jun 26 01:13:34.204: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:34.204: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:34.204: INFO: Jun 26 01:13:34.204: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 01:13:35.228: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 01:13:35.228: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC }] Jun 26 01:13:35.228: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:35.228: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:35.228: INFO: Jun 26 01:13:35.228: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 01:13:36.233: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 01:13:36.233: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC }] Jun 26 01:13:36.233: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:36.233: INFO: ss-2 latest-worker2 
Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:36.233: INFO: Jun 26 01:13:36.233: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 01:13:37.238: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 01:13:37.238: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC }] Jun 26 01:13:37.238: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:37.238: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:37.238: INFO: Jun 26 01:13:37.238: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 01:13:38.244: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 01:13:38.244: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC }] Jun 26 01:13:38.244: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:38.244: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:38.244: INFO: Jun 26 01:13:38.244: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 01:13:39.248: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 01:13:39.248: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC }] Jun 26 01:13:39.248: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:39.248: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:39.248: INFO: Jun 26 01:13:39.248: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 01:13:40.253: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 01:13:40.253: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC }] Jun 26 01:13:40.254: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:40.254: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:40.254: INFO: Jun 26 01:13:40.254: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 01:13:41.259: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 01:13:41.259: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC }] Jun 26 01:13:41.259: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:41.259: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:41.259: INFO: Jun 26 01:13:41.259: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 26 01:13:42.264: INFO: POD NODE PHASE GRACE CONDITIONS Jun 26 01:13:42.264: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:12:48 +0000 UTC }] Jun 26 01:13:42.264: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:42.264: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-26 01:13:11 +0000 UTC }] Jun 26 01:13:42.264: INFO: Jun 26 01:13:42.264: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 
replicas and waiting until none of pods will run in namespace statefulset-6474 Jun 26 01:13:43.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:13:43.423: INFO: rc: 1 Jun 26 01:13:43.423: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jun 26 01:13:53.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:13:53.528: INFO: rc: 1 Jun 26 01:13:53.528: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:14:03.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:14:03.642: INFO: rc: 1 Jun 26 01:14:03.642: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:14:13.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:14:13.755: INFO: rc: 1 Jun 26 01:14:13.755: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:14:23.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:14:23.862: INFO: rc: 1 Jun 26 01:14:23.862: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:14:33.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 --
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:14:33.967: INFO: rc: 1 Jun 26 01:14:33.967: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:14:43.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:14:44.075: INFO: rc: 1 Jun 26 01:14:44.075: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:14:54.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:14:54.193: INFO: rc: 1 Jun 26 01:14:54.193: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:15:04.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:15:04.299: INFO: rc: 1 Jun 26 01:15:04.299: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:15:14.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:15:14.410: INFO: rc: 1 Jun 26 01:15:14.410: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:15:24.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:15:24.505: INFO: rc: 1 Jun 26 01:15:24.505: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:15:34.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:15:34.631: INFO: rc: 1 Jun 26 01:15:34.631: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:15:44.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:15:44.733: INFO: rc: 1 Jun 26 01:15:44.733: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:15:54.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:15:54.841: INFO: rc: 1 Jun 26 01:15:54.841: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:16:04.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:16:04.939: INFO: rc: 1 Jun 26 01:16:04.939: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:16:14.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:16:15.049: INFO: rc: 1 Jun 26 01:16:15.049: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:16:25.050: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:16:25.168: INFO: rc: 1 Jun 26 01:16:25.168: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:16:35.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:16:35.285: INFO: rc: 1 Jun 26 01:16:35.285: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:16:45.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:16:45.402: INFO: rc: 1 Jun 26 01:16:45.402: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:16:55.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:16:55.518: INFO: rc: 1 Jun 26 01:16:55.518: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:17:05.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:17:05.634: INFO: rc: 1 Jun 26 01:17:05.634: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:17:15.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:17:15.742: INFO: rc: 1 Jun 26 01:17:15.742: 
INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:17:25.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:17:25.851: INFO: rc: 1 Jun 26 01:17:25.851: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:17:35.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:17:35.955: INFO: rc: 1 Jun 26 01:17:35.955: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:17:45.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:17:46.062: INFO: rc: 1 Jun 26 01:17:46.062: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:17:56.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:17:56.177: INFO: rc: 1 Jun 26 01:17:56.177: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 26 01:18:06.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 26 01:18:06.281: INFO: rc: 1 Jun 26 01:18:06.281: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6474 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: 
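The loop above is the framework's RunHostCmd helper being retried every 10 seconds; the test apparently uses this mv to restore the pod's readiness-probe target, and the trailing "|| true" only masks a failing mv inside the container, so the non-zero rc comes from kubectl itself failing to establish the exec connection once pod ss-0 is gone. A minimal Go sketch of the same retry pattern, not the framework's exact code: the kubectl flags are copied from the log, while the timeout value and the assumption that kubectl is on PATH are mine.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmd shells out to kubectl exec, mirroring the logged invocation.
func runHostCmd(ns, pod, cmd string) (string, error) {
	out, err := exec.Command("kubectl",
		"--server=https://172.30.12.66:32773",
		"--kubeconfig=/root/.kube/config",
		"exec", "--namespace="+ns, pod,
		"--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	const (
		interval = 10 * time.Second // matches "Waiting 10s to retry" in the log
		timeout  = 5 * time.Minute  // assumed; the framework derives its own bound
	)
	deadline := time.Now().Add(timeout)
	for {
		out, err := runHostCmd("statefulset-6474", "ss-0",
			"mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true")
		if err == nil {
			fmt.Println("stdout:", out)
			return
		}
		if time.Now().After(deadline) {
			// Like the log at 01:18:46, give up and report the last output.
			fmt.Println("giving up after timeout; last error:", err)
			return
		}
		fmt.Printf("Waiting %s to retry failed RunHostCmd: %v\n", interval, err)
		time.Sleep(interval)
	}
}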
Jun 26 01:18:46.724: INFO: Scaling statefulset ss to 0
Jun 26 01:18:46.732: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Jun 26 01:18:46.734: INFO: Deleting all statefulset in ns statefulset-6474
Jun 26 01:18:46.736: INFO: Scaling statefulset ss to 0
Jun 26 01:18:46.742: INFO: Waiting for statefulset status.replicas updated to 0
Jun 26 01:18:46.744: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 26 01:18:46.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6474" for this suite.
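The teardown above scales the StatefulSet to zero and then waits for status.replicas to report 0 before deleting it, per the AfterEach at statefulset.go:114. A rough client-go equivalent of that scale-and-wait step, a sketch under my own assumptions (the poll interval and timeout are invented; the framework's helper differs in detail):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns, name := "statefulset-6474", "ss"

	// Scale the StatefulSet to 0 by updating spec.replicas.
	ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	zero := int32(0)
	ss.Spec.Replicas = &zero
	if _, err := cs.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Poll until status.replicas reports 0 (interval/timeout assumed).
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		cur, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return cur.Status.Replicas == 0, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("statefulset scaled down; status.replicas == 0")
}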
• [SLOW TEST:358.776 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":294,"completed":292,"skipped":4791,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]}
SSSSSSSSSSSSSSSSS
Jun 26 01:18:46.816: INFO: Running AfterSuite actions on all nodes
Jun 26 01:18:46.816: INFO: Running AfterSuite actions on node 1
Jun 26 01:18:46.816: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":294,"completed":292,"skipped":4808,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]}

Summarizing 2 Failures:

[Fail] [sig-auth] Certificates API [Privileged:ClusterAdmin] [It] should support CSR API operations [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:231

[Fail] [sig-auth] Certificates API [Privileged:ClusterAdmin] [It] should support building a client with a CSR [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:117

Ran 294 of 5102 Specs in 5979.542 seconds
FAIL! -- 292 Passed | 2 Failed | 0 Pending | 4808 Skipped
--- FAIL: TestE2E (5979.64s)
FAIL
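The run ends with 292 of 294 conformance specs passing; the two failures are both in the sig-auth Certificates API tests in test/e2e/auth/certificates.go and are also recorded in the JUnit report named above. A small Go sketch that lists failed cases from such a report; the struct fields are assumed from the common JUnit schema, not read from this particular file:

package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// Minimal JUnit shapes: only the attributes read below are declared;
// encoding/xml ignores elements that do not match, so extra fields are fine.
type testSuite struct {
	XMLName  xml.Name   `xml:"testsuite"`
	Tests    int        `xml:"tests,attr"`
	Failures int        `xml:"failures,attr"`
	Cases    []testCase `xml:"testcase"`
}

type testCase struct {
	Name    string   `xml:"name,attr"`
	Failure *failure `xml:"failure"`
}

type failure struct {
	Message string `xml:"message,attr"`
	Body    string `xml:",chardata"`
}

func main() {
	// Path taken from the log; adjust for a different results directory.
	data, err := os.ReadFile("/home/opnfv/functest/results/k8s_conformance/junit_01.xml")
	if err != nil {
		panic(err)
	}
	var s testSuite
	if err := xml.Unmarshal(data, &s); err != nil {
		panic(err)
	}
	fmt.Printf("%d tests, %d failures\n", s.Tests, s.Failures)
	for _, c := range s.Cases {
		if c.Failure != nil {
			fmt.Println("FAILED:", c.Name)
		}
	}
}

Against this run, such a scan should surface the same two specs the summary lists: "should support CSR API operations" and "should support building a client with a CSR".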