I0407 23:37:34.928281 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0407 23:37:34.928463 7 e2e.go:124] Starting e2e run "f57357f1-1185-4755-9a1b-488e554b7439" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586302653 - Will randomize all specs
Will run 275 of 4992 specs

Apr 7 23:37:34.982: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 23:37:34.988: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 7 23:37:35.008: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 7 23:37:35.044: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 7 23:37:35.044: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 7 23:37:35.044: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 7 23:37:35.055: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 7 23:37:35.055: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 7 23:37:35.055: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 7 23:37:35.057: INFO: kube-apiserver version: v1.17.0
Apr 7 23:37:35.057: INFO: >>> kubeConfig: /root/.kube/config
Apr 7 23:37:35.062: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 7 23:37:35.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Apr 7 23:37:35.138: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-33f0c388-b285-478e-9f0c-e9cc930ec088
STEP: Creating a pod to test consume secrets
Apr 7 23:37:35.148: INFO: Waiting up to 5m0s for pod "pod-secrets-f192adda-b730-4b72-bda3-db97193ff4fa" in namespace "secrets-4918" to be "Succeeded or Failed"
Apr 7 23:37:35.167: INFO: Pod "pod-secrets-f192adda-b730-4b72-bda3-db97193ff4fa": Phase="Pending", Reason="", readiness=false. Elapsed: 18.414383ms
Apr 7 23:37:37.170: INFO: Pod "pod-secrets-f192adda-b730-4b72-bda3-db97193ff4fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021994103s
Apr 7 23:37:39.174: INFO: Pod "pod-secrets-f192adda-b730-4b72-bda3-db97193ff4fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02554367s
STEP: Saw pod success
Apr 7 23:37:39.174: INFO: Pod "pod-secrets-f192adda-b730-4b72-bda3-db97193ff4fa" satisfied condition "Succeeded or Failed"
Apr 7 23:37:39.178: INFO: Trying to get logs from node latest-worker pod pod-secrets-f192adda-b730-4b72-bda3-db97193ff4fa container secret-volume-test:
STEP: delete the pod
Apr 7 23:37:39.249: INFO: Waiting for pod pod-secrets-f192adda-b730-4b72-bda3-db97193ff4fa to disappear
Apr 7 23:37:39.271: INFO: Pod pod-secrets-f192adda-b730-4b72-bda3-db97193ff4fa no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:37:39.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4918" for this suite.
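For context on what the test above exercises: a pod that consumes a Secret as a volume looks roughly like the manifest below. This is an illustrative sketch only; the names, image, command, and key are assumptions, since the suite builds its objects programmatically with generated names.

```yaml
# Illustrative sketch of a secret-as-volume pod; names and values are
# assumptions, not the generated names from the run above.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==          # base64-encoded "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test    # matches the container name in the log
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
```

The test waits for the pod to reach "Succeeded or Failed" and then inspects the container's logs, which is why the pod runs a one-shot `cat` with `restartPolicy: Never`.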
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":19,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 7 23:37:39.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Apr 7 23:37:39.382: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:37:39.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4595" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":2,"skipped":29,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 7 23:37:39.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7883
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7883
STEP: Creating statefulset with conflicting port in namespace statefulset-7883
STEP: Waiting until pod test-pod will start running in namespace statefulset-7883
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7883
Apr 7 23:37:45.593: INFO: Observed stateful pod in namespace: statefulset-7883, name: ss-0, uid: b72f256c-ef91-4c1b-875b-cd3abe7d08c9, status phase: Pending. Waiting for statefulset controller to delete.
Apr 7 23:37:45.652: INFO: Observed stateful pod in namespace: statefulset-7883, name: ss-0, uid: b72f256c-ef91-4c1b-875b-cd3abe7d08c9, status phase: Failed. Waiting for statefulset controller to delete.
Apr 7 23:37:45.661: INFO: Observed stateful pod in namespace: statefulset-7883, name: ss-0, uid: b72f256c-ef91-4c1b-875b-cd3abe7d08c9, status phase: Failed. Waiting for statefulset controller to delete.
Apr 7 23:37:45.668: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7883
STEP: Removing pod with conflicting port in namespace statefulset-7883
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7883 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 7 23:37:49.790: INFO: Deleting all statefulset in ns statefulset-7883
Apr 7 23:37:49.792: INFO: Scaling statefulset ss to 0
Apr 7 23:37:59.808: INFO: Waiting for statefulset status.replicas updated to 0
Apr 7 23:37:59.811: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:37:59.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7883" for this suite.
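The "conflicting port" setup in this test can be pictured with a manifest like the one below: a StatefulSet whose pod template requests a `hostPort`, while a plain pod already bound to the same `hostPort` on the chosen node keeps `ss-0` failing until that pod is removed. This is a sketch under assumptions; the suite generates its own names, image, and port.

```yaml
# Illustrative sketch only: names, image, and port value are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss                  # matches the pod-name prefix ss-0 in the log
spec:
  serviceName: test         # headless service, as created in the test setup
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 21017   # conflicts with a pre-existing pod on the node,
                            # so ss-0 is repeatedly recreated until it is freed
```

Once the conflicting pod is deleted, the StatefulSet controller recreates `ss-0` and it reaches Running, which is exactly the sequence the log above records.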
• [SLOW TEST:20.369 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":3,"skipped":83,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 7 23:37:59.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 7 23:37:59.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d67dc57-44f8-44fa-8d8c-af06e2212eaa" in namespace "downward-api-1799" to be "Succeeded or Failed"
Apr 7 23:37:59.902: INFO: Pod "downwardapi-volume-8d67dc57-44f8-44fa-8d8c-af06e2212eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.710605ms
Apr 7 23:38:01.907: INFO: Pod "downwardapi-volume-8d67dc57-44f8-44fa-8d8c-af06e2212eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008012509s
Apr 7 23:38:03.912: INFO: Pod "downwardapi-volume-8d67dc57-44f8-44fa-8d8c-af06e2212eaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013411662s
STEP: Saw pod success
Apr 7 23:38:03.912: INFO: Pod "downwardapi-volume-8d67dc57-44f8-44fa-8d8c-af06e2212eaa" satisfied condition "Succeeded or Failed"
Apr 7 23:38:03.915: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8d67dc57-44f8-44fa-8d8c-af06e2212eaa container client-container:
STEP: delete the pod
Apr 7 23:38:03.927: INFO: Waiting for pod downwardapi-volume-8d67dc57-44f8-44fa-8d8c-af06e2212eaa to disappear
Apr 7 23:38:03.932: INFO: Pod downwardapi-volume-8d67dc57-44f8-44fa-8d8c-af06e2212eaa no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:38:03.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1799" for this suite.
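A pod that exposes its own CPU request through a downward API volume, as this test does, can be sketched as follows. This uses the standard `downwardAPI` volume syntax with a `resourceFieldRef`; the pod name, image, request value, and mount path are assumptions, not the test's generated values.

```yaml
# Illustrative sketch only: names and values are assumptions; the
# resourceFieldRef/requests.cpu wiring is the standard downward API syntax.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod
spec:
  restartPolicy: Never
  containers:
  - name: client-container        # matches the container name in the log
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                 # the value the volume file should expose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
```

The test then reads the container's logs and checks that the file content reflects the declared CPU request.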
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":92,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 7 23:38:03.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 7 23:38:03.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Apr 7 23:38:05.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-777 create -f -'
Apr 7 23:38:08.395: INFO: stderr: ""
Apr 7 23:38:08.395: INFO: stdout: "e2e-test-crd-publish-openapi-8741-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 7 23:38:08.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-777 delete e2e-test-crd-publish-openapi-8741-crds test-foo'
Apr 7 23:38:08.490: INFO: stderr: ""
Apr 7 23:38:08.490: INFO: stdout: "e2e-test-crd-publish-openapi-8741-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Apr 7 23:38:08.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-777 apply -f -'
Apr 7 23:38:08.740: INFO: stderr: ""
Apr 7 23:38:08.740: INFO: stdout: "e2e-test-crd-publish-openapi-8741-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 7 23:38:08.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-777 delete e2e-test-crd-publish-openapi-8741-crds test-foo'
Apr 7 23:38:08.852: INFO: stderr: ""
Apr 7 23:38:08.852: INFO: stdout: "e2e-test-crd-publish-openapi-8741-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Apr 7 23:38:08.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-777 create -f -'
Apr 7 23:38:09.083: INFO: rc: 1
Apr 7 23:38:09.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-777 apply -f -'
Apr 7 23:38:09.298: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Apr 7 23:38:09.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-777 create -f -'
Apr 7 23:38:09.527: INFO: rc: 1
Apr 7 23:38:09.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-777 apply -f -'
Apr 7 23:38:09.751: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Apr 7 23:38:09.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8741-crds'
Apr 7 23:38:09.979: INFO: stderr: ""
Apr 7 23:38:09.979: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8741-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Apr 7 23:38:09.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8741-crds.metadata'
Apr 7 23:38:10.228: INFO: stderr: ""
Apr 7 23:38:10.228: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8741-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Apr 7 23:38:10.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8741-crds.spec'
Apr 7 23:38:10.453: INFO: stderr: ""
Apr 7 23:38:10.454: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8741-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Apr 7 23:38:10.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8741-crds.spec.bars'
Apr 7 23:38:10.697: INFO: stderr: ""
Apr 7 23:38:10.697: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8741-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Apr 7 23:38:10.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8741-crds.spec.bars2'
Apr 7 23:38:10.933: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:38:13.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-777" for this suite.
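The `kubectl explain` output above implies a CRD with an OpenAPI v3 validation schema roughly like the following. This is a reconstruction for illustration: the group, names, and field types are inferred from the explain output (where the log's type annotations were partially stripped), not copied from the test's actual CRD.

```yaml
# Illustrative reconstruction of the kind of CRD this test publishes;
# group/names are assumptions, and field types are inferred from the
# `kubectl explain` output above.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:                  # "List of Bars and their specs."
                type: array
                items:
                  type: object
                  required: ["name"] # name is marked -required- in explain
                  properties:
                    name:
                      type: string
                    age:
                      type: string   # assumed type; stripped in the log
                    bazs:
                      type: array
                      items:
                        type: string
```

With a schema like this published, `kubectl create`/`apply` can reject objects with unknown or missing required properties client-side (the `rc: 1` results above), and `kubectl explain` can describe each property.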
• [SLOW TEST:9.915 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":5,"skipped":92,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 7 23:38:13.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 7 23:38:13.919: INFO: Waiting up to 5m0s for pod "pod-08ae2ef5-f598-4273-88db-d8302922d7d9" in namespace "emptydir-6738" to be "Succeeded or Failed"
Apr 7 23:38:13.935: INFO: Pod "pod-08ae2ef5-f598-4273-88db-d8302922d7d9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.739912ms
Apr 7 23:38:15.956: INFO: Pod "pod-08ae2ef5-f598-4273-88db-d8302922d7d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036133617s
Apr 7 23:38:17.959: INFO: Pod "pod-08ae2ef5-f598-4273-88db-d8302922d7d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039763646s
STEP: Saw pod success
Apr 7 23:38:17.959: INFO: Pod "pod-08ae2ef5-f598-4273-88db-d8302922d7d9" satisfied condition "Succeeded or Failed"
Apr 7 23:38:17.961: INFO: Trying to get logs from node latest-worker pod pod-08ae2ef5-f598-4273-88db-d8302922d7d9 container test-container:
STEP: delete the pod
Apr 7 23:38:18.012: INFO: Waiting for pod pod-08ae2ef5-f598-4273-88db-d8302922d7d9 to disappear
Apr 7 23:38:18.026: INFO: Pod pod-08ae2ef5-f598-4273-88db-d8302922d7d9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:38:18.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6738" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":94,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 7 23:38:18.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-80f29960-ea33-4436-8a4e-aa8984e0fdae
STEP: Creating a pod to test consume secrets
Apr 7 23:38:18.152: INFO: Waiting up to 5m0s for pod "pod-secrets-71735c5d-830e-448d-9e9e-da4244a62d77" in namespace "secrets-4197" to be "Succeeded or Failed"
Apr 7 23:38:18.201: INFO: Pod "pod-secrets-71735c5d-830e-448d-9e9e-da4244a62d77": Phase="Pending", Reason="", readiness=false. Elapsed: 49.20555ms
Apr 7 23:38:20.206: INFO: Pod "pod-secrets-71735c5d-830e-448d-9e9e-da4244a62d77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053790107s
Apr 7 23:38:22.210: INFO: Pod "pod-secrets-71735c5d-830e-448d-9e9e-da4244a62d77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057476617s
STEP: Saw pod success
Apr 7 23:38:22.210: INFO: Pod "pod-secrets-71735c5d-830e-448d-9e9e-da4244a62d77" satisfied condition "Succeeded or Failed"
Apr 7 23:38:22.213: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-71735c5d-830e-448d-9e9e-da4244a62d77 container secret-volume-test:
STEP: delete the pod
Apr 7 23:38:22.287: INFO: Waiting for pod pod-secrets-71735c5d-830e-448d-9e9e-da4244a62d77 to disappear
Apr 7 23:38:22.292: INFO: Pod pod-secrets-71735c5d-830e-448d-9e9e-da4244a62d77 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:38:22.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4197" for this suite.
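The "mappings and Item Mode" variant above differs from the plain secret-volume test in one place: the volume lists explicit `items` that remap a key to a new path and set a per-item file mode. A sketch of that volume section, with key, path, and mode values assumed for illustration:

```yaml
# Illustrative sketch only: key, path, and mode are assumptions; the
# items/mode fields are the standard secret-volume mapping syntax.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1              # secret key to project
        path: new-path-data-1    # remapped file name inside the mount
        mode: 0400               # per-item mode, overriding defaultMode
```

The test then verifies both the remapped path and the resulting file permissions from inside the container.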
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:38:22.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0407 23:38:33.854398 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 7 23:38:33.854: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:38:33.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5233" for this suite. 
• [SLOW TEST:11.559 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":8,"skipped":177,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:38:33.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Apr 7 23:38:33.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Apr 7 23:38:34.181: INFO: stderr: "" Apr 7 23:38:34.181: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:38:34.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4056" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":9,"skipped":227,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:38:34.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 7 23:38:38.785: INFO: Successfully updated pod "annotationupdate3c9cb990-dd41-4f25-b94b-dc5ab40c9876" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:38:40.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5099" for this suite. 
• [SLOW TEST:6.638 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":303,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:38:40.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:38:41.080: INFO: Creating deployment "webserver-deployment" Apr 7 23:38:41.087: INFO: Waiting for observed generation 1 Apr 7 23:38:43.110: INFO: Waiting for all required pods to come up Apr 7 23:38:43.114: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 7 23:38:53.124: INFO: Waiting for deployment "webserver-deployment" to complete Apr 7 23:38:53.129: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 7 23:38:53.135: INFO: Updating deployment webserver-deployment Apr 7 23:38:53.135: INFO: Waiting 
for observed generation 2 Apr 7 23:38:55.160: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 7 23:38:55.163: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 7 23:38:55.166: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 7 23:38:55.173: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 7 23:38:55.173: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 7 23:38:55.176: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 7 23:38:55.181: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 7 23:38:55.181: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 7 23:38:55.186: INFO: Updating deployment webserver-deployment Apr 7 23:38:55.186: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 7 23:38:55.304: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 7 23:38:55.364: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 7 23:38:55.780: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1081 /apis/apps/v1/namespaces/deployment-1081/deployments/webserver-deployment 1805e93a-3ea8-4338-bab0-93a1414c0104 6264971 3 2020-04-07 23:38:41 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] 
[]} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030bfb08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-07 23:38:53 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-07 23:38:55 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 7 23:38:55.942: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-1081 /apis/apps/v1/namespaces/deployment-1081/replicasets/webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 6265033 3 2020-04-07 23:38:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 
deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 1805e93a-3ea8-4338-bab0-93a1414c0104 0xc002a84057 0xc002a84058}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a840c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 7 23:38:55.942: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 7 23:38:55.942: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-1081 /apis/apps/v1/namespaces/deployment-1081/replicasets/webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 6265013 3 2020-04-07 23:38:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 1805e93a-3ea8-4338-bab0-93a1414c0104 0xc0030bff97 0xc0030bff98}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 
595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030bfff8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 7 23:38:56.020: INFO: Pod "webserver-deployment-595b5b9587-244pv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-244pv webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-244pv 0d23e7ea-f150-4954-b46c-1c848f670816 6265019 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc002bb31e7 0xc002bb31e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.020: INFO: Pod "webserver-deployment-595b5b9587-2j4sw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2j4sw webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-2j4sw 7fcc7311-58d9-4f26-8698-393b7d118b26 6264814 0 2020-04-07 23:38:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc002bb3307 0xc002bb3308}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.52,StartTime:2020-04-07 23:38:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-07 23:38:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ea08928753b4a32ce2b7c85ee06af6e0621543b4e8a6cff63e47696257b771b3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.52,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.020: INFO: Pod "webserver-deployment-595b5b9587-4dgnd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4dgnd webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-4dgnd 66c8f622-4ef8-49b1-8987-8022a9d9d8bf 6265017 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc002bb3487 0xc002bb3488}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.020: INFO: Pod "webserver-deployment-595b5b9587-74j5w" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-74j5w webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-74j5w d73c6100-9180-430d-968b-769dde0b7c84 6264990 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc002bb35a7 0xc002bb35a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.021: INFO: Pod "webserver-deployment-595b5b9587-d4vh7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d4vh7 webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-d4vh7 ac9741c1-924b-4293-b5bf-d013b73c8c15 6265021 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc002bb36c7 0xc002bb36c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-07 23:38:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.021: INFO: Pod "webserver-deployment-595b5b9587-d68mp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d68mp webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-d68mp effc22f4-c8bb-4d6f-afb0-ffa748259f3c 6265041 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc002bb3827 0xc002bb3828}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-07 23:38:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.021: INFO: Pod "webserver-deployment-595b5b9587-dcfx6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dcfx6 webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-dcfx6 6196fe14-2c02-4953-9175-26ba71a6b55f 6264988 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc002bb3987 0xc002bb3988}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.022: INFO: Pod "webserver-deployment-595b5b9587-ftq69" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ftq69 webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-ftq69 5d25991b-c911-44e9-bd6b-08e89f7f331c 6264875 0 2020-04-07 23:38:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc002bb3aa7 0xc002bb3aa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.90,StartTime:2020-04-07 23:38:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-07 23:38:48 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2a7bc3d84b7070a8f552604530b7980a3ac2b54ead7cde933dc621a20e4e2362,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.022: INFO: Pod "webserver-deployment-595b5b9587-fwd4w" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fwd4w webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-fwd4w 353ebf23-0f89-4e47-b507-98647bb5a7b7 6264864 0 2020-04-07 23:38:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc002bb3c27 0xc002bb3c28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.55,StartTime:2020-04-07 23:38:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-07 23:38:48 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6a9f312bc6bcb0e94f6f75c1cd37f0a94a31e72b7f18b445006ab3b2e010f645,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.022: INFO: Pod "webserver-deployment-595b5b9587-h9mrd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h9mrd webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-h9mrd f5fc7ed4-9586-4145-a4fb-7634148b49bd 6264876 0 2020-04-07 23:38:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc002bb3da7 0xc002bb3da8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.56,StartTime:2020-04-07 23:38:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-07 23:38:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://341bf800e05adaea13e6d92e6af697cbdb0b0083ebdb3ac3feb7f3baddf7bdfd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.022: INFO: Pod "webserver-deployment-595b5b9587-kqz5t" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kqz5t webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-kqz5t 0e518cc9-f953-42dc-801b-0fc2ad049d85 6265016 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc002bb3f27 0xc002bb3f28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.022: INFO: Pod "webserver-deployment-595b5b9587-mbg8l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mbg8l webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-mbg8l f7815334-baa1-436d-8b7d-298321fee0df 6264981 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc0029da047 0xc0029da048}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 7 23:38:56.022: INFO: Pod "webserver-deployment-595b5b9587-n8788" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n8788 webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-n8788 154a8f8a-c07a-45a5-9606-1bb80e011dd9 6264833 0 2020-04-07 23:38:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc0029da177 0xc0029da178}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.53,StartTime:2020-04-07 23:38:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-07 23:38:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://46bc94e03b50c9c683e46d5388520d75b885963a46c7bb229a3d7133a91f677a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.53,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 7 23:38:56.023: INFO: Pod "webserver-deployment-595b5b9587-nmm9b" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nmm9b webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-nmm9b f8cb77a5-6038-4a79-be28-16edbe6b73c6 6264862 0 2020-04-07 23:38:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc0029da2f7 0xc0029da2f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.89,StartTime:2020-04-07 23:38:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-07 23:38:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2f6ef15fe4fe3bb72f879180687b4ecfb29c26d48fc5b408b32ea3f0b0db4ab9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.89,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 7 23:38:56.023: INFO: Pod "webserver-deployment-595b5b9587-qmnqr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qmnqr webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-qmnqr 775e57a8-e84b-49c4-9b4d-a5dda3d6e3de 6265018 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc0029da477 0xc0029da478}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 7 23:38:56.023: INFO: Pod "webserver-deployment-595b5b9587-r496b" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-r496b webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-r496b e718925f-87e4-4e00-849f-683f107871a9 6264995 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc0029da7a7 0xc0029da7a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 7 23:38:56.023: INFO: Pod "webserver-deployment-595b5b9587-rs7j6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rs7j6 webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-rs7j6 ae762d05-9568-4639-a4db-ccac269e3788 6264838 0 2020-04-07 23:38:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc0029da977 0xc0029da978}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.54,StartTime:2020-04-07 23:38:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-07 23:38:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://daf56ea88ec5f6f8da5e3c042d851d3fbe6bf2406ebc52c4f914a7dd8cf37e06,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 7 23:38:56.023: INFO: Pod "webserver-deployment-595b5b9587-svddx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-svddx webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-svddx 240221fb-1858-4381-bee9-34903feb42e1 6265015 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc0029dacd7 0xc0029dacd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 7 23:38:56.024: INFO: Pod "webserver-deployment-595b5b9587-szzwx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-szzwx webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-szzwx e6c1be78-4ef6-4559-853a-065282b06ef5 6264996 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc0029dadf7 0xc0029dadf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 7 23:38:56.024: INFO: Pod "webserver-deployment-595b5b9587-wrvz9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wrvz9 webserver-deployment-595b5b9587- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-595b5b9587-wrvz9 174b6240-eadf-4007-90e9-d960578542e8 6264887 0 2020-04-07 23:38:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 361a7139-6fb8-4f42-a918-60d06a8b7cff 0xc0029daf17 0xc0029daf18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.91,StartTime:2020-04-07 23:38:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-07 23:38:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://018dc81f00b1582f93a8a4330c9a8ab3f2a0d943aa3b5f993855db89a888f044,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 7 23:38:56.024: INFO: Pod "webserver-deployment-c7997dcc8-4jpfj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4jpfj webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-4jpfj 692899bb-1a95-4c69-8ff9-9f83b5781149 6264953 0 2020-04-07 23:38:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc0029db267 0xc0029db268}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-07 23:38:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.024: INFO: Pod "webserver-deployment-c7997dcc8-77x8b" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-77x8b webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-77x8b 53e83133-1f65-407e-bba8-73a744b4e0be 6264926 0 2020-04-07 23:38:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc0029db3f7 0xc0029db3f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-07 23:38:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.024: INFO: Pod "webserver-deployment-c7997dcc8-cfq74" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cfq74 webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-cfq74 0e3274ea-2fa0-4da7-b91a-6cdbe25d8e64 6265009 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc0029db577 0xc0029db578}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.025: INFO: Pod "webserver-deployment-c7997dcc8-db6xg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-db6xg webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-db6xg 9ab17c19-52ce-4a6e-bcac-64b03a4980ca 6265032 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc0029db6a7 0xc0029db6a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.025: INFO: Pod "webserver-deployment-c7997dcc8-fcnlp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fcnlp webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-fcnlp 62db640d-a150-4a3b-b73c-7ee1b8b68992 6265008 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc0029db7d7 0xc0029db7d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.025: INFO: Pod "webserver-deployment-c7997dcc8-hmw6h" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hmw6h webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-hmw6h 8015b1d7-925a-40de-bb7e-280d229f28ff 6264935 0 2020-04-07 23:38:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc0029db907 0xc0029db908}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-07 23:38:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.025: INFO: Pod "webserver-deployment-c7997dcc8-k5zg7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k5zg7 webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-k5zg7 06e51a1b-1ba5-4d03-af48-a0cf3ccec708 6264989 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc0029dba87 0xc0029dba88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.025: INFO: Pod "webserver-deployment-c7997dcc8-k72gp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k72gp webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-k72gp 86fd706b-d052-49c9-8944-3409523ec2f1 6265014 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc0029dbbb7 0xc0029dbbb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.025: INFO: Pod "webserver-deployment-c7997dcc8-lmr8q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lmr8q webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-lmr8q 4cb3f4ec-53c2-4487-a572-2856ebf4d0a3 6265010 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc0029dbef7 0xc0029dbef8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.025: INFO: Pod "webserver-deployment-c7997dcc8-mpk76" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mpk76 webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-mpk76 3d39356a-2343-4f66-bc1b-7ca556fd4ccf 6264991 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc002948047 0xc002948048}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.026: INFO: Pod "webserver-deployment-c7997dcc8-sxdcm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sxdcm webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-sxdcm e6a530af-b4b8-400a-85e1-078638f1b802 6265040 0 2020-04-07 23:38:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc002948287 0xc002948288}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-07 23:38:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.026: INFO: Pod "webserver-deployment-c7997dcc8-szl9t" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-szl9t webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-szl9t ceb0c7c2-801f-448b-af4e-d322aa92f96b 6264954 0 2020-04-07 23:38:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc002948587 0xc002948588}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-07 23:38:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 7 23:38:56.026: INFO: Pod "webserver-deployment-c7997dcc8-vkm9q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vkm9q webserver-deployment-c7997dcc8- deployment-1081 /api/v1/namespaces/deployment-1081/pods/webserver-deployment-c7997dcc8-vkm9q 0b1a6f7b-a3f0-4828-bd10-233cf1647061 6264923 0 2020-04-07 23:38:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6a4113f1-659c-42cb-adcb-7c447c1ebfd8 0xc0029487c7 0xc0029487c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xtkt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xtkt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xtkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:38:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-07 23:38:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:38:56.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1081" for this suite. • [SLOW TEST:15.404 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":11,"skipped":304,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:38:56.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-b7afae4a-2096-404a-a8de-00a93db6f10b STEP: Creating a pod to test consume configMaps Apr 7 23:38:56.507: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df" in namespace "projected-1927" to be "Succeeded or Failed" Apr 7 23:38:56.604: INFO: Pod "pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df": Phase="Pending", Reason="", readiness=false. Elapsed: 96.430559ms Apr 7 23:38:59.023: INFO: Pod "pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.515214894s Apr 7 23:39:01.334: INFO: Pod "pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.826856734s Apr 7 23:39:03.939: INFO: Pod "pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df": Phase="Pending", Reason="", readiness=false. Elapsed: 7.431752926s Apr 7 23:39:05.993: INFO: Pod "pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df": Phase="Pending", Reason="", readiness=false. Elapsed: 9.485016135s Apr 7 23:39:08.012: INFO: Pod "pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df": Phase="Pending", Reason="", readiness=false. Elapsed: 11.503996652s Apr 7 23:39:10.233: INFO: Pod "pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df": Phase="Pending", Reason="", readiness=false. Elapsed: 13.725221454s Apr 7 23:39:12.422: INFO: Pod "pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df": Phase="Running", Reason="", readiness=true. Elapsed: 15.91418591s Apr 7 23:39:14.503: INFO: Pod "pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 17.995247834s STEP: Saw pod success Apr 7 23:39:14.503: INFO: Pod "pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df" satisfied condition "Succeeded or Failed" Apr 7 23:39:14.539: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df container projected-configmap-volume-test: STEP: delete the pod Apr 7 23:39:14.775: INFO: Waiting for pod pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df to disappear Apr 7 23:39:14.844: INFO: Pod pod-projected-configmaps-258133de-0f9b-4360-bff6-d68fc6cf34df no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:39:14.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1927" for this suite. • [SLOW TEST:18.850 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":317,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client Apr 7 23:39:15.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:39:15.457: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:39:16.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7882" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":13,"skipped":321,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:39:16.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:39:29.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3562" for this suite. • [SLOW TEST:13.246 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":275,"completed":14,"skipped":334,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:39:29.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3450.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3450.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 7 23:39:36.067: INFO: DNS probes using dns-test-dc6923ed-d46e-4f55-9d08-75209597ba6d succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3450.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3450.svc.cluster.local CNAME > 
/results/jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 7 23:39:42.182: INFO: File wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local from pod dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 23:39:42.185: INFO: File jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local from pod dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 23:39:42.185: INFO: Lookups using dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 failed for: [wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local] Apr 7 23:39:47.190: INFO: File wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local from pod dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 23:39:47.194: INFO: File jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local from pod dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 23:39:47.194: INFO: Lookups using dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 failed for: [wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local] Apr 7 23:39:52.191: INFO: File wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local from pod dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 23:39:52.195: INFO: File jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local from pod dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 7 23:39:52.195: INFO: Lookups using dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 failed for: [wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local] Apr 7 23:39:57.190: INFO: File wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local from pod dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 23:39:57.193: INFO: File jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local from pod dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 23:39:57.193: INFO: Lookups using dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 failed for: [wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local] Apr 7 23:40:02.190: INFO: File wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local from pod dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 23:40:02.194: INFO: File jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local from pod dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 7 23:40:02.194: INFO: Lookups using dns-3450/dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 failed for: [wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local] Apr 7 23:40:07.194: INFO: DNS probes using dns-test-c9dbc46c-f541-4973-875e-5d42391540c7 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3450.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3450.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3450.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3450.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 7 23:40:13.841: INFO: DNS probes using dns-test-d330a0c0-fdf3-4171-8b52-35e3338b8540 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:40:13.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3450" for this suite. 
• [SLOW TEST:44.037 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":15,"skipped":352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:40:13.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-4c77e2ff-38fe-44d5-989a-42cdec4f301c STEP: Creating a pod to test consume secrets Apr 7 23:40:14.455: INFO: Waiting up to 5m0s for pod "pod-secrets-c96cbc23-a996-4436-999e-cc886a6d581d" in namespace "secrets-3266" to be "Succeeded or Failed" Apr 7 23:40:14.458: INFO: Pod "pod-secrets-c96cbc23-a996-4436-999e-cc886a6d581d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.567935ms Apr 7 23:40:16.462: INFO: Pod "pod-secrets-c96cbc23-a996-4436-999e-cc886a6d581d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006353008s Apr 7 23:40:18.465: INFO: Pod "pod-secrets-c96cbc23-a996-4436-999e-cc886a6d581d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009917186s STEP: Saw pod success Apr 7 23:40:18.465: INFO: Pod "pod-secrets-c96cbc23-a996-4436-999e-cc886a6d581d" satisfied condition "Succeeded or Failed" Apr 7 23:40:18.468: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c96cbc23-a996-4436-999e-cc886a6d581d container secret-env-test: STEP: delete the pod Apr 7 23:40:18.489: INFO: Waiting for pod pod-secrets-c96cbc23-a996-4436-999e-cc886a6d581d to disappear Apr 7 23:40:18.507: INFO: Pod pod-secrets-c96cbc23-a996-4436-999e-cc886a6d581d no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:40:18.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3266" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":385,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:40:18.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 7 23:40:22.644: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:40:22.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4927" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":386,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:40:22.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-2244bb91-d023-4f3f-a129-41f09bd16596 STEP: Creating a pod to test consume secrets Apr 7 23:40:22.817: INFO: Waiting up to 5m0s for pod 
"pod-secrets-098f8576-143a-481c-971f-634f00574eef" in namespace "secrets-20" to be "Succeeded or Failed" Apr 7 23:40:22.821: INFO: Pod "pod-secrets-098f8576-143a-481c-971f-634f00574eef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.579682ms Apr 7 23:40:24.824: INFO: Pod "pod-secrets-098f8576-143a-481c-971f-634f00574eef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007143679s Apr 7 23:40:26.828: INFO: Pod "pod-secrets-098f8576-143a-481c-971f-634f00574eef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010693057s STEP: Saw pod success Apr 7 23:40:26.828: INFO: Pod "pod-secrets-098f8576-143a-481c-971f-634f00574eef" satisfied condition "Succeeded or Failed" Apr 7 23:40:26.831: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-098f8576-143a-481c-971f-634f00574eef container secret-volume-test: STEP: delete the pod Apr 7 23:40:26.846: INFO: Waiting for pod pod-secrets-098f8576-143a-481c-971f-634f00574eef to disappear Apr 7 23:40:26.851: INFO: Pod pod-secrets-098f8576-143a-481c-971f-634f00574eef no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:40:26.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-20" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":399,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:40:26.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:40:43.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4741" for this suite. • [SLOW TEST:16.298 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":19,"skipped":404,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:40:43.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 7 23:40:43.258: INFO: Waiting up to 5m0s for pod "pod-6a34c52e-2650-4444-aa79-db37c2fbaaf1" in namespace "emptydir-5963" to be "Succeeded or Failed" Apr 7 23:40:43.262: INFO: Pod "pod-6a34c52e-2650-4444-aa79-db37c2fbaaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.900377ms Apr 7 23:40:45.265: INFO: Pod "pod-6a34c52e-2650-4444-aa79-db37c2fbaaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006997853s Apr 7 23:40:47.299: INFO: Pod "pod-6a34c52e-2650-4444-aa79-db37c2fbaaf1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041192572s STEP: Saw pod success Apr 7 23:40:47.299: INFO: Pod "pod-6a34c52e-2650-4444-aa79-db37c2fbaaf1" satisfied condition "Succeeded or Failed" Apr 7 23:40:47.302: INFO: Trying to get logs from node latest-worker pod pod-6a34c52e-2650-4444-aa79-db37c2fbaaf1 container test-container: STEP: delete the pod Apr 7 23:40:47.333: INFO: Waiting for pod pod-6a34c52e-2650-4444-aa79-db37c2fbaaf1 to disappear Apr 7 23:40:47.360: INFO: Pod pod-6a34c52e-2650-4444-aa79-db37c2fbaaf1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:40:47.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5963" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":412,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:40:47.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:40:54.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4262" for this suite. • [SLOW TEST:7.121 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":21,"skipped":416,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:40:54.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:40:54.593: INFO: Create a RollingUpdate DaemonSet Apr 7 23:40:54.597: INFO: Check that daemon pods launch on every node of the cluster Apr 7 23:40:54.602: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:40:54.606: INFO: Number of nodes with available pods: 0 Apr 7 23:40:54.606: INFO: Node latest-worker is running more than one daemon pod Apr 7 23:40:55.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:40:55.614: INFO: Number of nodes with available pods: 0 Apr 7 23:40:55.614: INFO: Node latest-worker is running more than one daemon pod Apr 7 23:40:56.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:40:56.614: 
INFO: Number of nodes with available pods: 0 Apr 7 23:40:56.614: INFO: Node latest-worker is running more than one daemon pod Apr 7 23:40:57.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:40:57.614: INFO: Number of nodes with available pods: 0 Apr 7 23:40:57.614: INFO: Node latest-worker is running more than one daemon pod Apr 7 23:40:58.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:40:58.615: INFO: Number of nodes with available pods: 1 Apr 7 23:40:58.615: INFO: Node latest-worker is running more than one daemon pod Apr 7 23:40:59.630: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:40:59.633: INFO: Number of nodes with available pods: 2 Apr 7 23:40:59.633: INFO: Number of running nodes: 2, number of available pods: 2 Apr 7 23:40:59.633: INFO: Update the DaemonSet to trigger a rollout Apr 7 23:40:59.638: INFO: Updating DaemonSet daemon-set Apr 7 23:41:13.664: INFO: Roll back the DaemonSet before rollout is complete Apr 7 23:41:13.670: INFO: Updating DaemonSet daemon-set Apr 7 23:41:13.670: INFO: Make sure DaemonSet rollback is complete Apr 7 23:41:13.676: INFO: Wrong image for pod: daemon-set-tgslt. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 7 23:41:13.676: INFO: Pod daemon-set-tgslt is not available Apr 7 23:41:13.694: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:41:14.699: INFO: Wrong image for pod: daemon-set-tgslt. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 7 23:41:14.699: INFO: Pod daemon-set-tgslt is not available Apr 7 23:41:14.704: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:41:15.815: INFO: Wrong image for pod: daemon-set-tgslt. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 7 23:41:15.815: INFO: Pod daemon-set-tgslt is not available Apr 7 23:41:15.820: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:41:16.699: INFO: Pod daemon-set-cjwvn is not available Apr 7 23:41:16.703: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-799, will wait for the garbage collector to delete the pods Apr 7 23:41:16.768: INFO: Deleting DaemonSet.extensions daemon-set took: 6.286048ms Apr 7 23:41:17.068: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.21928ms Apr 7 23:41:23.084: INFO: Number of nodes with available pods: 0 Apr 7 23:41:23.084: INFO: Number of running nodes: 0, number of available pods: 0 Apr 7 23:41:23.090: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-799/daemonsets","resourceVersion":"6266198"},"items":null} Apr 7 23:41:23.093: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-799/pods","resourceVersion":"6266198"},"items":null} 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:41:23.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-799" for this suite. • [SLOW TEST:28.624 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":22,"skipped":418,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:41:23.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-fc07d410-bd10-4074-bf72-7a5e65a6e3bd [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:41:23.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6205" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":23,"skipped":483,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:41:23.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 7 23:41:23.268: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7256 /api/v1/namespaces/watch-7256/configmaps/e2e-watch-test-resource-version 88f46f77-b774-41e7-96f5-d95f2e8982e5 6266211 0 2020-04-07 23:41:23 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 7 23:41:23.268: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7256 /api/v1/namespaces/watch-7256/configmaps/e2e-watch-test-resource-version 88f46f77-b774-41e7-96f5-d95f2e8982e5 6266212 0 2020-04-07 23:41:23 +0000 UTC map[watch-this-configmap:from-resource-version] 
map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:41:23.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7256" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":24,"skipped":490,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:41:23.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:41:23.326: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 7 23:41:25.368: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 
23:41:26.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-630" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":25,"skipped":492,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:41:26.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-9847 STEP: creating replication controller nodeport-test in namespace services-9847 I0407 23:41:27.029777 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9847, replica count: 2 I0407 23:41:30.080194 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0407 23:41:33.080488 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 7 23:41:33.080: INFO: Creating new exec pod Apr 7 23:41:38.115: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9847 execpodvjw6f -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 7 23:41:38.348: INFO: stderr: "I0407 23:41:38.271635 352 log.go:172] (0xc000a98e70) (0xc000a2a460) Create stream\nI0407 23:41:38.271700 352 log.go:172] (0xc000a98e70) (0xc000a2a460) Stream added, broadcasting: 1\nI0407 23:41:38.274500 352 log.go:172] (0xc000a98e70) Reply frame received for 1\nI0407 23:41:38.274559 352 log.go:172] (0xc000a98e70) (0xc000a78280) Create stream\nI0407 23:41:38.274576 352 log.go:172] (0xc000a98e70) (0xc000a78280) Stream added, broadcasting: 3\nI0407 23:41:38.275629 352 log.go:172] (0xc000a98e70) Reply frame received for 3\nI0407 23:41:38.275663 352 log.go:172] (0xc000a98e70) (0xc000a2a500) Create stream\nI0407 23:41:38.275674 352 log.go:172] (0xc000a98e70) (0xc000a2a500) Stream added, broadcasting: 5\nI0407 23:41:38.276809 352 log.go:172] (0xc000a98e70) Reply frame received for 5\nI0407 23:41:38.339540 352 log.go:172] (0xc000a98e70) Data frame received for 5\nI0407 23:41:38.339575 352 log.go:172] (0xc000a2a500) (5) Data frame handling\nI0407 23:41:38.339596 352 log.go:172] (0xc000a2a500) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0407 23:41:38.340493 352 log.go:172] (0xc000a98e70) Data frame received for 5\nI0407 23:41:38.340520 352 log.go:172] (0xc000a2a500) (5) Data frame handling\nI0407 23:41:38.340542 352 log.go:172] (0xc000a2a500) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0407 23:41:38.340847 352 log.go:172] (0xc000a98e70) Data frame received for 5\nI0407 23:41:38.340883 352 log.go:172] (0xc000a2a500) (5) Data frame handling\nI0407 23:41:38.340906 352 log.go:172] (0xc000a98e70) Data frame received for 3\nI0407 23:41:38.340934 352 log.go:172] (0xc000a78280) (3) Data frame handling\nI0407 23:41:38.342903 352 log.go:172] (0xc000a98e70) Data frame received for 1\nI0407 23:41:38.342928 352 
log.go:172] (0xc000a2a460) (1) Data frame handling\nI0407 23:41:38.342947 352 log.go:172] (0xc000a2a460) (1) Data frame sent\nI0407 23:41:38.342969 352 log.go:172] (0xc000a98e70) (0xc000a2a460) Stream removed, broadcasting: 1\nI0407 23:41:38.342998 352 log.go:172] (0xc000a98e70) Go away received\nI0407 23:41:38.343487 352 log.go:172] (0xc000a98e70) (0xc000a2a460) Stream removed, broadcasting: 1\nI0407 23:41:38.343511 352 log.go:172] (0xc000a98e70) (0xc000a78280) Stream removed, broadcasting: 3\nI0407 23:41:38.343524 352 log.go:172] (0xc000a98e70) (0xc000a2a500) Stream removed, broadcasting: 5\n" Apr 7 23:41:38.348: INFO: stdout: "" Apr 7 23:41:38.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9847 execpodvjw6f -- /bin/sh -x -c nc -zv -t -w 2 10.96.142.142 80' Apr 7 23:41:38.542: INFO: stderr: "I0407 23:41:38.466811 371 log.go:172] (0xc000aa0fd0) (0xc000928820) Create stream\nI0407 23:41:38.466871 371 log.go:172] (0xc000aa0fd0) (0xc000928820) Stream added, broadcasting: 1\nI0407 23:41:38.471991 371 log.go:172] (0xc000aa0fd0) Reply frame received for 1\nI0407 23:41:38.472030 371 log.go:172] (0xc000aa0fd0) (0xc000633680) Create stream\nI0407 23:41:38.472039 371 log.go:172] (0xc000aa0fd0) (0xc000633680) Stream added, broadcasting: 3\nI0407 23:41:38.472840 371 log.go:172] (0xc000aa0fd0) Reply frame received for 3\nI0407 23:41:38.472880 371 log.go:172] (0xc000aa0fd0) (0xc0004f2aa0) Create stream\nI0407 23:41:38.472893 371 log.go:172] (0xc000aa0fd0) (0xc0004f2aa0) Stream added, broadcasting: 5\nI0407 23:41:38.473931 371 log.go:172] (0xc000aa0fd0) Reply frame received for 5\nI0407 23:41:38.536897 371 log.go:172] (0xc000aa0fd0) Data frame received for 5\nI0407 23:41:38.536947 371 log.go:172] (0xc0004f2aa0) (5) Data frame handling\nI0407 23:41:38.536974 371 log.go:172] (0xc0004f2aa0) (5) Data frame sent\nI0407 23:41:38.536993 371 log.go:172] (0xc000aa0fd0) Data frame received for 5\n+ 
nc -zv -t -w 2 10.96.142.142 80\nConnection to 10.96.142.142 80 port [tcp/http] succeeded!\nI0407 23:41:38.537034 371 log.go:172] (0xc000aa0fd0) Data frame received for 3\nI0407 23:41:38.537086 371 log.go:172] (0xc000633680) (3) Data frame handling\nI0407 23:41:38.537277 371 log.go:172] (0xc0004f2aa0) (5) Data frame handling\nI0407 23:41:38.538971 371 log.go:172] (0xc000aa0fd0) Data frame received for 1\nI0407 23:41:38.538985 371 log.go:172] (0xc000928820) (1) Data frame handling\nI0407 23:41:38.538995 371 log.go:172] (0xc000928820) (1) Data frame sent\nI0407 23:41:38.539005 371 log.go:172] (0xc000aa0fd0) (0xc000928820) Stream removed, broadcasting: 1\nI0407 23:41:38.539270 371 log.go:172] (0xc000aa0fd0) (0xc000928820) Stream removed, broadcasting: 1\nI0407 23:41:38.539285 371 log.go:172] (0xc000aa0fd0) (0xc000633680) Stream removed, broadcasting: 3\nI0407 23:41:38.539398 371 log.go:172] (0xc000aa0fd0) (0xc0004f2aa0) Stream removed, broadcasting: 5\n" Apr 7 23:41:38.542: INFO: stdout: "" Apr 7 23:41:38.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9847 execpodvjw6f -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32630' Apr 7 23:41:38.743: INFO: stderr: "I0407 23:41:38.665950 391 log.go:172] (0xc000a50630) (0xc0008e60a0) Create stream\nI0407 23:41:38.666002 391 log.go:172] (0xc000a50630) (0xc0008e60a0) Stream added, broadcasting: 1\nI0407 23:41:38.668621 391 log.go:172] (0xc000a50630) Reply frame received for 1\nI0407 23:41:38.668668 391 log.go:172] (0xc000a50630) (0xc000908000) Create stream\nI0407 23:41:38.668684 391 log.go:172] (0xc000a50630) (0xc000908000) Stream added, broadcasting: 3\nI0407 23:41:38.669755 391 log.go:172] (0xc000a50630) Reply frame received for 3\nI0407 23:41:38.669805 391 log.go:172] (0xc000a50630) (0xc000675220) Create stream\nI0407 23:41:38.669823 391 log.go:172] (0xc000a50630) (0xc000675220) Stream added, broadcasting: 5\nI0407 23:41:38.670856 391 
log.go:172] (0xc000a50630) Reply frame received for 5\nI0407 23:41:38.736642 391 log.go:172] (0xc000a50630) Data frame received for 5\nI0407 23:41:38.736676 391 log.go:172] (0xc000675220) (5) Data frame handling\nI0407 23:41:38.736698 391 log.go:172] (0xc000675220) (5) Data frame sent\nI0407 23:41:38.736709 391 log.go:172] (0xc000a50630) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.13 32630\nI0407 23:41:38.736726 391 log.go:172] (0xc000675220) (5) Data frame handling\nI0407 23:41:38.736775 391 log.go:172] (0xc000675220) (5) Data frame sent\nConnection to 172.17.0.13 32630 port [tcp/32630] succeeded!\nI0407 23:41:38.737054 391 log.go:172] (0xc000a50630) Data frame received for 5\nI0407 23:41:38.737091 391 log.go:172] (0xc000675220) (5) Data frame handling\nI0407 23:41:38.737244 391 log.go:172] (0xc000a50630) Data frame received for 3\nI0407 23:41:38.737259 391 log.go:172] (0xc000908000) (3) Data frame handling\nI0407 23:41:38.738850 391 log.go:172] (0xc000a50630) Data frame received for 1\nI0407 23:41:38.738884 391 log.go:172] (0xc0008e60a0) (1) Data frame handling\nI0407 23:41:38.738910 391 log.go:172] (0xc0008e60a0) (1) Data frame sent\nI0407 23:41:38.738927 391 log.go:172] (0xc000a50630) (0xc0008e60a0) Stream removed, broadcasting: 1\nI0407 23:41:38.738949 391 log.go:172] (0xc000a50630) Go away received\nI0407 23:41:38.739289 391 log.go:172] (0xc000a50630) (0xc0008e60a0) Stream removed, broadcasting: 1\nI0407 23:41:38.739307 391 log.go:172] (0xc000a50630) (0xc000908000) Stream removed, broadcasting: 3\nI0407 23:41:38.739315 391 log.go:172] (0xc000a50630) (0xc000675220) Stream removed, broadcasting: 5\n" Apr 7 23:41:38.743: INFO: stdout: "" Apr 7 23:41:38.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9847 execpodvjw6f -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32630' Apr 7 23:41:38.946: INFO: stderr: "I0407 23:41:38.874390 410 log.go:172] (0xc0007fe9a0) 
(0xc0007f61e0) Create stream\nI0407 23:41:38.874445 410 log.go:172] (0xc0007fe9a0) (0xc0007f61e0) Stream added, broadcasting: 1\nI0407 23:41:38.876981 410 log.go:172] (0xc0007fe9a0) Reply frame received for 1\nI0407 23:41:38.877025 410 log.go:172] (0xc0007fe9a0) (0xc0009ea000) Create stream\nI0407 23:41:38.877043 410 log.go:172] (0xc0007fe9a0) (0xc0009ea000) Stream added, broadcasting: 3\nI0407 23:41:38.878239 410 log.go:172] (0xc0007fe9a0) Reply frame received for 3\nI0407 23:41:38.878298 410 log.go:172] (0xc0007fe9a0) (0xc0009ea0a0) Create stream\nI0407 23:41:38.878311 410 log.go:172] (0xc0007fe9a0) (0xc0009ea0a0) Stream added, broadcasting: 5\nI0407 23:41:38.879266 410 log.go:172] (0xc0007fe9a0) Reply frame received for 5\nI0407 23:41:38.939136 410 log.go:172] (0xc0007fe9a0) Data frame received for 5\nI0407 23:41:38.939164 410 log.go:172] (0xc0009ea0a0) (5) Data frame handling\nI0407 23:41:38.939178 410 log.go:172] (0xc0009ea0a0) (5) Data frame sent\nI0407 23:41:38.939197 410 log.go:172] (0xc0007fe9a0) Data frame received for 5\nI0407 23:41:38.939216 410 log.go:172] (0xc0009ea0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32630\nConnection to 172.17.0.12 32630 port [tcp/32630] succeeded!\nI0407 23:41:38.939334 410 log.go:172] (0xc0007fe9a0) Data frame received for 3\nI0407 23:41:38.939354 410 log.go:172] (0xc0009ea000) (3) Data frame handling\nI0407 23:41:38.940758 410 log.go:172] (0xc0007fe9a0) Data frame received for 1\nI0407 23:41:38.940785 410 log.go:172] (0xc0007f61e0) (1) Data frame handling\nI0407 23:41:38.940797 410 log.go:172] (0xc0007f61e0) (1) Data frame sent\nI0407 23:41:38.940907 410 log.go:172] (0xc0007fe9a0) (0xc0007f61e0) Stream removed, broadcasting: 1\nI0407 23:41:38.941250 410 log.go:172] (0xc0007fe9a0) Go away received\nI0407 23:41:38.941397 410 log.go:172] (0xc0007fe9a0) (0xc0007f61e0) Stream removed, broadcasting: 1\nI0407 23:41:38.941417 410 log.go:172] (0xc0007fe9a0) (0xc0009ea000) Stream removed, broadcasting: 3\nI0407 
23:41:38.941437 410 log.go:172] (0xc0007fe9a0) (0xc0009ea0a0) Stream removed, broadcasting: 5\n" Apr 7 23:41:38.946: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:41:38.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9847" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.564 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":26,"skipped":545,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:41:38.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod 
pod-subpath-test-downwardapi-bg4g STEP: Creating a pod to test atomic-volume-subpath Apr 7 23:41:39.082: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bg4g" in namespace "subpath-3310" to be "Succeeded or Failed" Apr 7 23:41:39.086: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.730011ms Apr 7 23:41:41.090: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008226186s Apr 7 23:41:43.094: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Running", Reason="", readiness=true. Elapsed: 4.012401003s Apr 7 23:41:45.098: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Running", Reason="", readiness=true. Elapsed: 6.016831381s Apr 7 23:41:47.103: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Running", Reason="", readiness=true. Elapsed: 8.021054856s Apr 7 23:41:49.113: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Running", Reason="", readiness=true. Elapsed: 10.03094679s Apr 7 23:41:51.117: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Running", Reason="", readiness=true. Elapsed: 12.035405849s Apr 7 23:41:53.121: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Running", Reason="", readiness=true. Elapsed: 14.039742114s Apr 7 23:41:55.126: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Running", Reason="", readiness=true. Elapsed: 16.044248338s Apr 7 23:41:57.130: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Running", Reason="", readiness=true. Elapsed: 18.048601756s Apr 7 23:41:59.134: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Running", Reason="", readiness=true. Elapsed: 20.052465812s Apr 7 23:42:01.138: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Running", Reason="", readiness=true. Elapsed: 22.05678358s Apr 7 23:42:03.143: INFO: Pod "pod-subpath-test-downwardapi-bg4g": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.061082897s STEP: Saw pod success Apr 7 23:42:03.143: INFO: Pod "pod-subpath-test-downwardapi-bg4g" satisfied condition "Succeeded or Failed" Apr 7 23:42:03.146: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-bg4g container test-container-subpath-downwardapi-bg4g: STEP: delete the pod Apr 7 23:42:03.169: INFO: Waiting for pod pod-subpath-test-downwardapi-bg4g to disappear Apr 7 23:42:03.174: INFO: Pod pod-subpath-test-downwardapi-bg4g no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-bg4g Apr 7 23:42:03.174: INFO: Deleting pod "pod-subpath-test-downwardapi-bg4g" in namespace "subpath-3310" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:42:03.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3310" for this suite. • [SLOW TEST:24.231 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":27,"skipped":559,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes 
client Apr 7 23:42:03.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 7 23:42:03.281: INFO: Waiting up to 5m0s for pod "pod-8be13ab2-cd5f-4e3d-b3f4-04f2bab7f19c" in namespace "emptydir-1852" to be "Succeeded or Failed" Apr 7 23:42:03.301: INFO: Pod "pod-8be13ab2-cd5f-4e3d-b3f4-04f2bab7f19c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.004028ms Apr 7 23:42:05.305: INFO: Pod "pod-8be13ab2-cd5f-4e3d-b3f4-04f2bab7f19c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023503497s Apr 7 23:42:07.309: INFO: Pod "pod-8be13ab2-cd5f-4e3d-b3f4-04f2bab7f19c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027871223s STEP: Saw pod success Apr 7 23:42:07.309: INFO: Pod "pod-8be13ab2-cd5f-4e3d-b3f4-04f2bab7f19c" satisfied condition "Succeeded or Failed" Apr 7 23:42:07.312: INFO: Trying to get logs from node latest-worker pod pod-8be13ab2-cd5f-4e3d-b3f4-04f2bab7f19c container test-container: STEP: delete the pod Apr 7 23:42:07.374: INFO: Waiting for pod pod-8be13ab2-cd5f-4e3d-b3f4-04f2bab7f19c to disappear Apr 7 23:42:07.380: INFO: Pod pod-8be13ab2-cd5f-4e3d-b3f4-04f2bab7f19c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:42:07.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1852" for this suite. 
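The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above come from the framework's poll-until-phase loop: check the pod's phase, log the elapsed time, sleep a couple of seconds, repeat until a terminal phase or the timeout. A minimal, cluster-free sketch of that pattern (the function name and the phase getter are illustrative stand-ins, not the real e2e framework helpers):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a phase in `want` or `timeout` elapses.

    Mirrors the log's 'Waiting up to 5m0s for pod ... to be "Succeeded or
    Failed"' loop: check phase, report elapsed time, sleep, repeat.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase={phase!r}, Elapsed: {elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f'pod still {phase!r} after {timeout}s')
        time.sleep(interval)

# Simulated pod that reports Pending twice, then Succeeded
# (matching the three status lines typically seen per pod above):
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), interval=0.01)
```

The real framework additionally fetches logs from the pod's node on success and waits for the pod to disappear after deletion, as the surrounding STEP lines show.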
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":567,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:42:07.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-a856834f-6361-4e6d-ba54-72fab5ef4627 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-a856834f-6361-4e6d-ba54-72fab5ef4627 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:43:35.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2041" for this suite. 
• [SLOW TEST:88.551 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:43:35.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 7 23:43:36.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:36.058: INFO: Number of nodes with available pods: 0 Apr 7 23:43:36.058: INFO: Node latest-worker is running more than one daemon pod Apr 7 23:43:37.062: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:37.065: INFO: Number of nodes with available pods: 0 Apr 7 23:43:37.065: INFO: Node latest-worker is running more than one daemon pod Apr 7 23:43:38.147: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:38.164: INFO: Number of nodes with available pods: 0 Apr 7 23:43:38.164: INFO: Node latest-worker is running more than one daemon pod Apr 7 23:43:39.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:39.121: INFO: Number of nodes with available pods: 0 Apr 7 23:43:39.121: INFO: Node latest-worker is running more than one daemon pod Apr 7 23:43:40.063: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:40.066: INFO: Number of nodes with available pods: 1 Apr 7 23:43:40.067: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:41.062: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:41.066: INFO: Number of nodes with available pods: 2 Apr 7 23:43:41.066: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Apr 7 23:43:41.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:41.088: INFO: Number of nodes with available pods: 1 Apr 7 23:43:41.088: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:42.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:42.097: INFO: Number of nodes with available pods: 1 Apr 7 23:43:42.097: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:43.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:43.096: INFO: Number of nodes with available pods: 1 Apr 7 23:43:43.096: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:44.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:44.097: INFO: Number of nodes with available pods: 1 Apr 7 23:43:44.097: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:45.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:45.097: INFO: Number of nodes with available pods: 1 Apr 7 23:43:45.097: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:46.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node Apr 7 23:43:46.097: INFO: Number of nodes with available pods: 1 Apr 7 23:43:46.097: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:47.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:47.097: INFO: Number of nodes with available pods: 1 Apr 7 23:43:47.097: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:48.092: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:48.096: INFO: Number of nodes with available pods: 1 Apr 7 23:43:48.096: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:49.095: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:49.098: INFO: Number of nodes with available pods: 1 Apr 7 23:43:49.098: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:50.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:50.097: INFO: Number of nodes with available pods: 1 Apr 7 23:43:50.097: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:51.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:51.124: INFO: Number of nodes with available pods: 1 Apr 7 23:43:51.124: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:52.094: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:52.098: INFO: Number of nodes with available pods: 1 Apr 7 23:43:52.098: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:53.092: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:53.095: INFO: Number of nodes with available pods: 1 Apr 7 23:43:53.095: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:54.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:54.095: INFO: Number of nodes with available pods: 1 Apr 7 23:43:54.095: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:55.092: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:55.095: INFO: Number of nodes with available pods: 1 Apr 7 23:43:55.095: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:56.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:56.096: INFO: Number of nodes with available pods: 1 Apr 7 23:43:56.097: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:43:57.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 23:43:57.096: INFO: Number of nodes with available pods: 2 Apr 7 23:43:57.096: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5040, will wait for the garbage collector to delete the pods Apr 7 23:43:57.158: INFO: Deleting DaemonSet.extensions daemon-set took: 6.607175ms Apr 7 23:43:57.558: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.273639ms Apr 7 23:44:03.061: INFO: Number of nodes with available pods: 0 Apr 7 23:44:03.061: INFO: Number of running nodes: 0, number of available pods: 0 Apr 7 23:44:03.064: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5040/daemonsets","resourceVersion":"6266989"},"items":null} Apr 7 23:44:03.066: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5040/pods","resourceVersion":"6266989"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:44:03.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5040" for this suite. 
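The DaemonSet check above loops once per second: it skips nodes carrying a taint the daemon pods cannot tolerate, counts the remaining nodes that have an available daemon pod, and finishes when that count equals the number of schedulable nodes. The per-iteration decision can be sketched without a cluster (the node and taint dictionaries below are simplified assumptions, not the real API types):

```python
def nodes_with_available_pods(nodes, available_pods_by_node,
                              intolerable_key="node-role.kubernetes.io/master"):
    """Return (schedulable, available): how many nodes the DaemonSet should
    cover, and how many of those currently run an available daemon pod.

    Nodes carrying the intolerable NoSchedule taint are skipped, matching the
    "DaemonSet pods can't tolerate node ... skip checking this node" lines.
    """
    schedulable = available = 0
    for node in nodes:
        if any(t["key"] == intolerable_key and t["effect"] == "NoSchedule"
               for t in node.get("taints", [])):
            continue  # tainted control-plane node: not counted either way
        schedulable += 1
        if available_pods_by_node.get(node["name"], 0) >= 1:
            available += 1
    return schedulable, available

# The three-node cluster from the log: one tainted control plane, two workers.
cluster = [
    {"name": "latest-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master",
                 "effect": "NoSchedule"}]},
    {"name": "latest-worker", "taints": []},
    {"name": "latest-worker2", "taints": []},
]
# Mid-revival state from the log: only latest-worker2 has an available pod.
print(nodes_with_available_pods(cluster, {"latest-worker2": 1}))  # (2, 1)
```

The test declares success once the tuple reaches `(2, 2)`, i.e. "Number of running nodes: 2, number of available pods: 2".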
• [SLOW TEST:27.146 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":30,"skipped":616,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:44:03.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-077e39bf-b486-440a-997b-aa753a92bd3b Apr 7 23:44:03.210: INFO: Pod name my-hostname-basic-077e39bf-b486-440a-997b-aa753a92bd3b: Found 0 pods out of 1 Apr 7 23:44:08.240: INFO: Pod name my-hostname-basic-077e39bf-b486-440a-997b-aa753a92bd3b: Found 1 pods out of 1 Apr 7 23:44:08.241: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-077e39bf-b486-440a-997b-aa753a92bd3b" are running Apr 7 23:44:08.278: INFO: Pod "my-hostname-basic-077e39bf-b486-440a-997b-aa753a92bd3b-b2lpz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 23:44:03 +0000 UTC Reason: 
Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 23:44:06 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 23:44:06 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 23:44:03 +0000 UTC Reason: Message:}]) Apr 7 23:44:08.278: INFO: Trying to dial the pod Apr 7 23:44:13.290: INFO: Controller my-hostname-basic-077e39bf-b486-440a-997b-aa753a92bd3b: Got expected result from replica 1 [my-hostname-basic-077e39bf-b486-440a-997b-aa753a92bd3b-b2lpz]: "my-hostname-basic-077e39bf-b486-440a-997b-aa753a92bd3b-b2lpz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:44:13.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8586" for this suite. 
• [SLOW TEST:10.213 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":31,"skipped":622,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:44:13.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-9716be98-43d5-449f-bb50-93667292a418 STEP: Creating a pod to test consume configMaps Apr 7 23:44:13.370: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-312adcd0-8ca2-490e-8291-ce6a86f9f9e1" in namespace "projected-3839" to be "Succeeded or Failed" Apr 7 23:44:13.374: INFO: Pod "pod-projected-configmaps-312adcd0-8ca2-490e-8291-ce6a86f9f9e1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.690854ms Apr 7 23:44:15.378: INFO: Pod "pod-projected-configmaps-312adcd0-8ca2-490e-8291-ce6a86f9f9e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007878882s Apr 7 23:44:17.382: INFO: Pod "pod-projected-configmaps-312adcd0-8ca2-490e-8291-ce6a86f9f9e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011643338s STEP: Saw pod success Apr 7 23:44:17.382: INFO: Pod "pod-projected-configmaps-312adcd0-8ca2-490e-8291-ce6a86f9f9e1" satisfied condition "Succeeded or Failed" Apr 7 23:44:17.385: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-312adcd0-8ca2-490e-8291-ce6a86f9f9e1 container projected-configmap-volume-test: STEP: delete the pod Apr 7 23:44:17.418: INFO: Waiting for pod pod-projected-configmaps-312adcd0-8ca2-490e-8291-ce6a86f9f9e1 to disappear Apr 7 23:44:17.423: INFO: Pod pod-projected-configmaps-312adcd0-8ca2-490e-8291-ce6a86f9f9e1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:44:17.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3839" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:44:17.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-22c8c1c6-efcd-43f0-a9e7-f9890944e36e STEP: Creating a pod to test consume secrets Apr 7 23:44:17.570: INFO: Waiting up to 5m0s for pod "pod-secrets-4a320893-67c2-4299-8e83-f1e95aadf9db" in namespace "secrets-8088" to be "Succeeded or Failed" Apr 7 23:44:17.626: INFO: Pod "pod-secrets-4a320893-67c2-4299-8e83-f1e95aadf9db": Phase="Pending", Reason="", readiness=false. Elapsed: 55.974592ms Apr 7 23:44:19.630: INFO: Pod "pod-secrets-4a320893-67c2-4299-8e83-f1e95aadf9db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060182759s Apr 7 23:44:21.634: INFO: Pod "pod-secrets-4a320893-67c2-4299-8e83-f1e95aadf9db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.063976649s STEP: Saw pod success Apr 7 23:44:21.634: INFO: Pod "pod-secrets-4a320893-67c2-4299-8e83-f1e95aadf9db" satisfied condition "Succeeded or Failed" Apr 7 23:44:21.637: INFO: Trying to get logs from node latest-worker pod pod-secrets-4a320893-67c2-4299-8e83-f1e95aadf9db container secret-volume-test: STEP: delete the pod Apr 7 23:44:21.693: INFO: Waiting for pod pod-secrets-4a320893-67c2-4299-8e83-f1e95aadf9db to disappear Apr 7 23:44:21.700: INFO: Pod pod-secrets-4a320893-67c2-4299-8e83-f1e95aadf9db no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:44:21.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8088" for this suite. STEP: Destroying namespace "secret-namespace-3731" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":648,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:44:21.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 7 23:44:21.780: INFO: Waiting 
up to 5m0s for pod "pod-fcbf193d-4067-435f-bb5c-de408bc05562" in namespace "emptydir-7999" to be "Succeeded or Failed" Apr 7 23:44:21.783: INFO: Pod "pod-fcbf193d-4067-435f-bb5c-de408bc05562": Phase="Pending", Reason="", readiness=false. Elapsed: 3.666841ms Apr 7 23:44:23.793: INFO: Pod "pod-fcbf193d-4067-435f-bb5c-de408bc05562": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013589667s Apr 7 23:44:25.798: INFO: Pod "pod-fcbf193d-4067-435f-bb5c-de408bc05562": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018080359s STEP: Saw pod success Apr 7 23:44:25.798: INFO: Pod "pod-fcbf193d-4067-435f-bb5c-de408bc05562" satisfied condition "Succeeded or Failed" Apr 7 23:44:25.801: INFO: Trying to get logs from node latest-worker2 pod pod-fcbf193d-4067-435f-bb5c-de408bc05562 container test-container: STEP: delete the pod Apr 7 23:44:25.855: INFO: Waiting for pod pod-fcbf193d-4067-435f-bb5c-de408bc05562 to disappear Apr 7 23:44:25.877: INFO: Pod pod-fcbf193d-4067-435f-bb5c-de408bc05562 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:44:25.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7999" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":657,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:44:25.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 7 23:44:26.776: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 7 23:44:28.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721899866, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721899866, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63721899866, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721899866, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 7 23:44:31.818: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:44:31.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:44:33.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3938" for this suite. STEP: Destroying namespace "webhook-3938-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.212 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":35,"skipped":660,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:44:33.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 7 23:44:33.176: INFO: Waiting up to 5m0s for pod "downward-api-bdd25405-7c7b-4cef-833d-2d84d4bd8561" in namespace "downward-api-4345" to be "Succeeded or Failed" Apr 7 23:44:33.191: INFO: Pod "downward-api-bdd25405-7c7b-4cef-833d-2d84d4bd8561": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.42673ms Apr 7 23:44:35.195: INFO: Pod "downward-api-bdd25405-7c7b-4cef-833d-2d84d4bd8561": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019138196s Apr 7 23:44:37.199: INFO: Pod "downward-api-bdd25405-7c7b-4cef-833d-2d84d4bd8561": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023521981s STEP: Saw pod success Apr 7 23:44:37.199: INFO: Pod "downward-api-bdd25405-7c7b-4cef-833d-2d84d4bd8561" satisfied condition "Succeeded or Failed" Apr 7 23:44:37.202: INFO: Trying to get logs from node latest-worker pod downward-api-bdd25405-7c7b-4cef-833d-2d84d4bd8561 container dapi-container: STEP: delete the pod Apr 7 23:44:37.246: INFO: Waiting for pod downward-api-bdd25405-7c7b-4cef-833d-2d84d4bd8561 to disappear Apr 7 23:44:37.251: INFO: Pod downward-api-bdd25405-7c7b-4cef-833d-2d84d4bd8561 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:44:37.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4345" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":665,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:44:37.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Apr 7 23:44:37.353: INFO: Waiting up to 5m0s for pod "var-expansion-d5b9ffb5-2cf3-4e34-af0c-2e9dd36baa11" in namespace "var-expansion-5561" to be "Succeeded or Failed" Apr 7 23:44:37.357: INFO: Pod "var-expansion-d5b9ffb5-2cf3-4e34-af0c-2e9dd36baa11": Phase="Pending", Reason="", readiness=false. Elapsed: 3.787384ms Apr 7 23:44:39.361: INFO: Pod "var-expansion-d5b9ffb5-2cf3-4e34-af0c-2e9dd36baa11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00780381s Apr 7 23:44:41.365: INFO: Pod "var-expansion-d5b9ffb5-2cf3-4e34-af0c-2e9dd36baa11": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01184038s STEP: Saw pod success Apr 7 23:44:41.365: INFO: Pod "var-expansion-d5b9ffb5-2cf3-4e34-af0c-2e9dd36baa11" satisfied condition "Succeeded or Failed" Apr 7 23:44:41.368: INFO: Trying to get logs from node latest-worker2 pod var-expansion-d5b9ffb5-2cf3-4e34-af0c-2e9dd36baa11 container dapi-container: STEP: delete the pod Apr 7 23:44:41.411: INFO: Waiting for pod var-expansion-d5b9ffb5-2cf3-4e34-af0c-2e9dd36baa11 to disappear Apr 7 23:44:41.417: INFO: Pod var-expansion-d5b9ffb5-2cf3-4e34-af0c-2e9dd36baa11 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:44:41.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5561" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":678,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:44:41.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:44:41.533: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-3ca0fcf1-ae92-4fa3-8ec0-6976d222e9ab" in namespace "security-context-test-3367" to be "Succeeded or Failed" Apr 7 23:44:41.543: INFO: Pod "busybox-readonly-false-3ca0fcf1-ae92-4fa3-8ec0-6976d222e9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014292ms Apr 7 23:44:43.553: INFO: Pod "busybox-readonly-false-3ca0fcf1-ae92-4fa3-8ec0-6976d222e9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020553832s Apr 7 23:44:45.557: INFO: Pod "busybox-readonly-false-3ca0fcf1-ae92-4fa3-8ec0-6976d222e9ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024916859s Apr 7 23:44:45.558: INFO: Pod "busybox-readonly-false-3ca0fcf1-ae92-4fa3-8ec0-6976d222e9ab" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:44:45.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3367" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":718,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:44:45.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 7 23:44:45.653: INFO: Waiting up to 5m0s for pod "pod-048da3ac-2f9f-42da-8d19-634eba7309ac" in namespace "emptydir-9059" to be "Succeeded or Failed" Apr 7 23:44:45.656: INFO: Pod "pod-048da3ac-2f9f-42da-8d19-634eba7309ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.473471ms Apr 7 23:44:47.660: INFO: Pod "pod-048da3ac-2f9f-42da-8d19-634eba7309ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007245355s Apr 7 23:44:49.664: INFO: Pod "pod-048da3ac-2f9f-42da-8d19-634eba7309ac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011368176s STEP: Saw pod success Apr 7 23:44:49.664: INFO: Pod "pod-048da3ac-2f9f-42da-8d19-634eba7309ac" satisfied condition "Succeeded or Failed" Apr 7 23:44:49.668: INFO: Trying to get logs from node latest-worker pod pod-048da3ac-2f9f-42da-8d19-634eba7309ac container test-container: STEP: delete the pod Apr 7 23:44:49.684: INFO: Waiting for pod pod-048da3ac-2f9f-42da-8d19-634eba7309ac to disappear Apr 7 23:44:49.699: INFO: Pod pod-048da3ac-2f9f-42da-8d19-634eba7309ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:44:49.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9059" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":743,"failed":0} S ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:44:49.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp 
+noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-484 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-484;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-484 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-484;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-484.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-484.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-484.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-484.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-484.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-484.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-484.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-484.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-484.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-484.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-484.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-484.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 254.34.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.34.254_udp@PTR;check="$$(dig +tcp +noall +answer +search 254.34.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.34.254_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-484 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-484;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-484 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-484;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-484.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-484.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-484.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-484.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-484.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-484.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-484.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-484.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-484.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-484.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-484.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-484.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-484.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 254.34.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.34.254_udp@PTR;check="$$(dig +tcp +noall +answer +search 254.34.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.34.254_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 7 23:44:55.826: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731) Apr 7 23:44:55.828: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731) Apr 7 23:44:55.831: INFO: Unable to read wheezy_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731) Apr 7 23:44:55.834: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731) Apr 7 23:44:55.837: INFO: Unable to read wheezy_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731) Apr 7 
23:44:55.840: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:44:55.843: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:44:55.847: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:44:55.869: INFO: Unable to read jessie_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:44:55.872: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:44:55.875: INFO: Unable to read jessie_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:44:55.878: INFO: Unable to read jessie_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:44:55.882: INFO: Unable to read jessie_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:44:55.885: INFO: Unable to read jessie_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:44:55.888: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:44:55.892: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:44:55.911: INFO: Lookups using dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-484 wheezy_tcp@dns-test-service.dns-484 wheezy_udp@dns-test-service.dns-484.svc wheezy_tcp@dns-test-service.dns-484.svc wheezy_udp@_http._tcp.dns-test-service.dns-484.svc wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-484 jessie_tcp@dns-test-service.dns-484 jessie_udp@dns-test-service.dns-484.svc jessie_tcp@dns-test-service.dns-484.svc jessie_udp@_http._tcp.dns-test-service.dns-484.svc jessie_tcp@_http._tcp.dns-test-service.dns-484.svc]
Apr 7 23:45:00.938: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.942: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.945: INFO: Unable to read wheezy_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.948: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.951: INFO: Unable to read wheezy_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.953: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.956: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.959: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.979: INFO: Unable to read jessie_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.982: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.984: INFO: Unable to read jessie_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.987: INFO: Unable to read jessie_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.991: INFO: Unable to read jessie_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.994: INFO: Unable to read jessie_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.996: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:00.999: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:01.017: INFO: Lookups using dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-484 wheezy_tcp@dns-test-service.dns-484 wheezy_udp@dns-test-service.dns-484.svc wheezy_tcp@dns-test-service.dns-484.svc wheezy_udp@_http._tcp.dns-test-service.dns-484.svc wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-484 jessie_tcp@dns-test-service.dns-484 jessie_udp@dns-test-service.dns-484.svc jessie_tcp@dns-test-service.dns-484.svc jessie_udp@_http._tcp.dns-test-service.dns-484.svc jessie_tcp@_http._tcp.dns-test-service.dns-484.svc]
Apr 7 23:45:05.916: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.919: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.923: INFO: Unable to read wheezy_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.926: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.929: INFO: Unable to read wheezy_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.932: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.934: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.937: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.960: INFO: Unable to read jessie_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.962: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.964: INFO: Unable to read jessie_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.967: INFO: Unable to read jessie_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.969: INFO: Unable to read jessie_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.971: INFO: Unable to read jessie_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.973: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.976: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:05.991: INFO: Lookups using dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-484 wheezy_tcp@dns-test-service.dns-484 wheezy_udp@dns-test-service.dns-484.svc wheezy_tcp@dns-test-service.dns-484.svc wheezy_udp@_http._tcp.dns-test-service.dns-484.svc wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-484 jessie_tcp@dns-test-service.dns-484 jessie_udp@dns-test-service.dns-484.svc jessie_tcp@dns-test-service.dns-484.svc jessie_udp@_http._tcp.dns-test-service.dns-484.svc jessie_tcp@_http._tcp.dns-test-service.dns-484.svc]
Apr 7 23:45:10.916: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.919: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.923: INFO: Unable to read wheezy_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.926: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.929: INFO: Unable to read wheezy_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.932: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.935: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.938: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.960: INFO: Unable to read jessie_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.962: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.965: INFO: Unable to read jessie_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.967: INFO: Unable to read jessie_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.970: INFO: Unable to read jessie_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.973: INFO: Unable to read jessie_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.975: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.978: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:10.995: INFO: Lookups using dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-484 wheezy_tcp@dns-test-service.dns-484 wheezy_udp@dns-test-service.dns-484.svc wheezy_tcp@dns-test-service.dns-484.svc wheezy_udp@_http._tcp.dns-test-service.dns-484.svc wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-484 jessie_tcp@dns-test-service.dns-484 jessie_udp@dns-test-service.dns-484.svc jessie_tcp@dns-test-service.dns-484.svc jessie_udp@_http._tcp.dns-test-service.dns-484.svc jessie_tcp@_http._tcp.dns-test-service.dns-484.svc]
Apr 7 23:45:15.916: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.920: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.924: INFO: Unable to read wheezy_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.927: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.933: INFO: Unable to read wheezy_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.936: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.939: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.942: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.960: INFO: Unable to read jessie_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.963: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.965: INFO: Unable to read jessie_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.968: INFO: Unable to read jessie_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.971: INFO: Unable to read jessie_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.973: INFO: Unable to read jessie_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.976: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.979: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:15.995: INFO: Lookups using dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-484 wheezy_tcp@dns-test-service.dns-484 wheezy_udp@dns-test-service.dns-484.svc wheezy_tcp@dns-test-service.dns-484.svc wheezy_udp@_http._tcp.dns-test-service.dns-484.svc wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-484 jessie_tcp@dns-test-service.dns-484 jessie_udp@dns-test-service.dns-484.svc jessie_tcp@dns-test-service.dns-484.svc jessie_udp@_http._tcp.dns-test-service.dns-484.svc jessie_tcp@_http._tcp.dns-test-service.dns-484.svc]
Apr 7 23:45:20.916: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.919: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.922: INFO: Unable to read wheezy_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.925: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.928: INFO: Unable to read wheezy_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.931: INFO: Unable to read wheezy_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.933: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.936: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.954: INFO: Unable to read jessie_udp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.956: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.959: INFO: Unable to read jessie_udp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.961: INFO: Unable to read jessie_tcp@dns-test-service.dns-484 from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.964: INFO: Unable to read jessie_udp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.966: INFO: Unable to read jessie_tcp@dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.969: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.972: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-484.svc from pod dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731: the server could not find the requested resource (get pods dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731)
Apr 7 23:45:20.988: INFO: Lookups using dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-484 wheezy_tcp@dns-test-service.dns-484 wheezy_udp@dns-test-service.dns-484.svc wheezy_tcp@dns-test-service.dns-484.svc wheezy_udp@_http._tcp.dns-test-service.dns-484.svc wheezy_tcp@_http._tcp.dns-test-service.dns-484.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-484 jessie_tcp@dns-test-service.dns-484 jessie_udp@dns-test-service.dns-484.svc jessie_tcp@dns-test-service.dns-484.svc jessie_udp@_http._tcp.dns-test-service.dns-484.svc jessie_tcp@_http._tcp.dns-test-service.dns-484.svc]
Apr 7 23:45:26.012: INFO: DNS probes using dns-484/dns-test-0c4b10d5-d408-4ae7-875f-ffede4998731 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:45:26.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-484" for this suite.
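For reference, the retry loop logged above re-probes every lookup each round and reports the still-failing set until it is empty ("Lookups ... failed for: [...]", then "DNS probes ... succeeded"). A minimal standalone sketch of that pattern, assuming a pluggable `resolve` callback; the function name and stub resolver are illustrative, not the e2e framework's API:

```python
import time

def wait_for_dns(names, resolve, timeout_s=60.0, interval_s=0.01):
    """Re-probe every name each round; stop once no lookup fails.

    Mirrors the shape of the test's retry loop: all names are checked
    every iteration, and the failing subset shrinks to empty on success.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        failed = [n for n in names if not resolve(n)]
        if not failed:
            return True
        time.sleep(interval_s)
    return False

# Illustrative stub: both lookups fail for three rounds, then succeed.
calls = {"n": 0}
def resolve(name):
    calls["n"] += 1
    return calls["n"] > 6  # 2 names x 3 failing rounds

ok = wait_for_dns(["wheezy_udp@dns-test-service", "jessie_udp@dns-test-service"], resolve)
```

Re-probing the full set each round (rather than only the previously failed names) is what lets a single success line declare all sixteen lookups healthy at once.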
• [SLOW TEST:36.819 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":40,"skipped":744,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 7 23:45:26.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:45:58.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4730" for this suite.
STEP: Destroying namespace "nsdeletetest-7854" for this suite.
Apr 7 23:45:58.083: INFO: Namespace nsdeletetest-7854 was already deleted
STEP: Destroying namespace "nsdeletetest-9761" for this suite.
• [SLOW TEST:31.559 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":41,"skipped":792,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 7 23:45:58.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Apr 7 23:45:58.159: INFO: namespace kubectl-8105
Apr 7 23:45:58.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8105'
Apr 7 23:45:58.444: INFO: stderr: ""
Apr 7 23:45:58.444: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 7 23:45:59.473: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 7 23:45:59.473: INFO: Found 0 / 1
Apr 7 23:46:00.449: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 7 23:46:00.449: INFO: Found 0 / 1
Apr 7 23:46:01.448: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 7 23:46:01.448: INFO: Found 0 / 1
Apr 7 23:46:02.448: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 7 23:46:02.448: INFO: Found 1 / 1
Apr 7 23:46:02.449: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 7 23:46:02.452: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 7 23:46:02.452: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 7 23:46:02.452: INFO: wait on agnhost-master startup in kubectl-8105
Apr 7 23:46:02.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-6k2hn agnhost-master --namespace=kubectl-8105'
Apr 7 23:46:02.575: INFO: stderr: ""
Apr 7 23:46:02.575: INFO: stdout: "Paused\n"
STEP: exposing RC
Apr 7 23:46:02.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8105'
Apr 7 23:46:02.743: INFO: stderr: ""
Apr 7 23:46:02.743: INFO: stdout: "service/rm2 exposed\n"
Apr 7 23:46:02.745: INFO: Service rm2 in namespace kubectl-8105 found.
STEP: exposing service
Apr 7 23:46:04.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8105'
Apr 7 23:46:04.938: INFO: stderr: ""
Apr 7 23:46:04.938: INFO: stdout: "service/rm3 exposed\n"
Apr 7 23:46:04.941: INFO: Service rm3 in namespace kubectl-8105 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:46:06.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8105" for this suite.
• [SLOW TEST:8.870 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":42,"skipped":795,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 7 23:46:06.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 7 23:46:07.042: INFO: Waiting up to 5m0s for pod "downward-api-5fe27f4e-7d5e-435c-a29f-12fa62a3d132" in namespace "downward-api-4553" to be "Succeeded or Failed"
Apr 7 23:46:07.056: INFO: Pod "downward-api-5fe27f4e-7d5e-435c-a29f-12fa62a3d132": Phase="Pending", Reason="", readiness=false. Elapsed: 14.153958ms
Apr 7 23:46:09.094: INFO: Pod "downward-api-5fe27f4e-7d5e-435c-a29f-12fa62a3d132": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05246498s
Apr 7 23:46:11.099: INFO: Pod "downward-api-5fe27f4e-7d5e-435c-a29f-12fa62a3d132": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056769322s
STEP: Saw pod success
Apr 7 23:46:11.099: INFO: Pod "downward-api-5fe27f4e-7d5e-435c-a29f-12fa62a3d132" satisfied condition "Succeeded or Failed"
Apr 7 23:46:11.101: INFO: Trying to get logs from node latest-worker2 pod downward-api-5fe27f4e-7d5e-435c-a29f-12fa62a3d132 container dapi-container:
STEP: delete the pod
Apr 7 23:46:11.122: INFO: Waiting for pod downward-api-5fe27f4e-7d5e-435c-a29f-12fa62a3d132 to disappear
Apr 7 23:46:11.142: INFO: Pod downward-api-5fe27f4e-7d5e-435c-a29f-12fa62a3d132 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:46:11.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4553" for this suite.
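The "Waiting up to 5m0s for pod ... to be \"Succeeded or Failed\"" lines above come from the framework polling the pod's phase until it reaches a terminal state. A minimal standalone sketch of that polling pattern, assuming a caller-supplied `get_phase` callback; the names and signature are illustrative, not the real e2e framework API:

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300.0, interval_s=0.01):
    """Poll get_phase() until it returns a terminal pod phase.

    Mirrors the log's wait loop: each poll reports the current phase,
    and the wait ends on "Succeeded" or "Failed", or raises on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated pod: stays Pending for two polls, then succeeds
# (matching the Pending, Pending, Succeeded sequence in the log).
_phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(_phases), timeout_s=10.0)
```

Returning the terminal phase (rather than a boolean) lets the caller distinguish the "Succeeded" and "Failed" outcomes the test treats as two acceptable end states.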
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":809,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:46:11.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3815.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3815.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3815.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3815.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 7 23:46:17.314: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:17.317: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:17.320: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:17.322: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:17.354: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:17.357: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod 
dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:17.360: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:17.363: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:17.369: INFO: Lookups using dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local] Apr 7 23:46:22.374: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:22.378: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:22.381: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod 
dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:22.384: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:22.394: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:22.398: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:22.401: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:22.404: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:22.411: INFO: Lookups using dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local] Apr 7 23:46:27.383: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:27.386: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:27.389: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:27.392: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:27.400: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:27.402: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:27.404: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod 
dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:27.406: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:27.413: INFO: Lookups using dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local] Apr 7 23:46:32.374: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:32.377: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:32.381: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:32.383: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod 
dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:32.391: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:32.394: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:32.397: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:32.399: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:32.404: INFO: Lookups using dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local] Apr 7 23:46:37.373: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local 
from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:37.377: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:37.380: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:37.400: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:37.410: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:37.412: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:37.415: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:37.417: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the 
server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:37.421: INFO: Lookups using dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local] Apr 7 23:46:42.374: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:42.378: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:42.380: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:42.383: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:42.392: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod 
dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:42.395: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:42.398: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:42.401: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local from pod dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433: the server could not find the requested resource (get pods dns-test-ab26972c-c5d5-4938-9803-3736882dd433) Apr 7 23:46:42.407: INFO: Lookups using dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3815.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3815.svc.cluster.local jessie_udp@dns-test-service-2.dns-3815.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3815.svc.cluster.local] Apr 7 23:46:47.414: INFO: DNS probes using dns-3815/dns-test-ab26972c-c5d5-4938-9803-3736882dd433 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:46:47.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-3815" for this suite. • [SLOW TEST:36.561 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":44,"skipped":812,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:46:47.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:46:52.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3844" for this suite. 
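The DNS probes earlier in this run derive a pod A record from the pod's IP with `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-3815.pod.cluster.local"}'`: dots in the IP become dashes, followed by the namespace and the `pod` subdomain. The same derivation in Python (the function name is my own; the namespace value is taken from the log):

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build the pod A-record name the DNS test queries: dots in the pod IP
    become dashes, followed by <namespace>.pod.<cluster-domain>."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"
```

For example, `pod_a_record("10.244.1.5", "dns-3815")` yields `10-244-1-5.dns-3815.pod.cluster.local`, which is the shape of name being resolved over both UDP (`dig +notcp`) and TCP (`dig +tcp`) in the probe loops above.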
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":828,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:46:52.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 7 23:46:52.224: INFO: Waiting up to 5m0s for pod "pod-77589588-cd35-46f7-8f2b-ef1b73995e12" in namespace "emptydir-1861" to be "Succeeded or Failed" Apr 7 23:46:52.234: INFO: Pod "pod-77589588-cd35-46f7-8f2b-ef1b73995e12": Phase="Pending", Reason="", readiness=false. Elapsed: 10.639188ms Apr 7 23:46:54.245: INFO: Pod "pod-77589588-cd35-46f7-8f2b-ef1b73995e12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02085539s Apr 7 23:46:56.249: INFO: Pod "pod-77589588-cd35-46f7-8f2b-ef1b73995e12": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025011616s STEP: Saw pod success Apr 7 23:46:56.249: INFO: Pod "pod-77589588-cd35-46f7-8f2b-ef1b73995e12" satisfied condition "Succeeded or Failed" Apr 7 23:46:56.251: INFO: Trying to get logs from node latest-worker pod pod-77589588-cd35-46f7-8f2b-ef1b73995e12 container test-container: STEP: delete the pod Apr 7 23:46:56.312: INFO: Waiting for pod pod-77589588-cd35-46f7-8f2b-ef1b73995e12 to disappear Apr 7 23:46:56.318: INFO: Pod pod-77589588-cd35-46f7-8f2b-ef1b73995e12 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:46:56.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1861" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":862,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:46:56.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 7 23:46:56.399: INFO: Waiting up to 5m0s for pod "pod-6f2772ee-7d52-4821-82b2-1a013f30b1d2" in namespace "emptydir-7290" to be "Succeeded or Failed" Apr 7 
23:46:56.402: INFO: Pod "pod-6f2772ee-7d52-4821-82b2-1a013f30b1d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.769446ms Apr 7 23:46:58.406: INFO: Pod "pod-6f2772ee-7d52-4821-82b2-1a013f30b1d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006752192s Apr 7 23:47:00.410: INFO: Pod "pod-6f2772ee-7d52-4821-82b2-1a013f30b1d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01039622s STEP: Saw pod success Apr 7 23:47:00.410: INFO: Pod "pod-6f2772ee-7d52-4821-82b2-1a013f30b1d2" satisfied condition "Succeeded or Failed" Apr 7 23:47:00.412: INFO: Trying to get logs from node latest-worker pod pod-6f2772ee-7d52-4821-82b2-1a013f30b1d2 container test-container: STEP: delete the pod Apr 7 23:47:00.481: INFO: Waiting for pod pod-6f2772ee-7d52-4821-82b2-1a013f30b1d2 to disappear Apr 7 23:47:00.495: INFO: Pod pod-6f2772ee-7d52-4821-82b2-1a013f30b1d2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:47:00.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7290" for this suite. 
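The EmptyDir test names like `(root,0777,default)` and `(non-root,0644,default)` encode the user the container runs as, the octal mode requested for the mounted file, and the volume medium. A local sketch of what those octal modes mean as POSIX permission bits (not the actual e2e test code, which checks the mode from inside the pod):

```python
import os
import stat
import tempfile

def describe_mode(path: str) -> str:
    """Return the permission bits of `path` as an octal string like '0644'."""
    return format(stat.S_IMODE(os.stat(path).st_mode), "04o")

# Create a scratch file and apply the two modes the EmptyDir tests exercise.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o777)   # rwx for owner, group, and other: (root,0777,default)
mode_777 = describe_mode(path)
os.chmod(path, 0o644)   # owner rw, group/other read-only: (non-root,0644,default)
mode_644 = describe_mode(path)
os.unlink(path)
```

`[LinuxOnly]` in the test name reflects that these mode semantics are POSIX-specific.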
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":870,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 7 23:47:00.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 7 23:47:05.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4166" for this suite.
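The Watchers test above verifies that several concurrent watches over the same event stream all observe resource versions in the same order. A simplified local model of that invariant, assuming a single producer fanning identical events out to per-watcher FIFO queues (this is an illustration of the property being checked, not the actual apimachinery test code):

```python
import queue
import threading

def broadcast(events, n_watchers):
    """Deliver one ordered event stream to several watcher queues and return
    the sequence each watcher observed. Because every queue is FIFO and the
    producer writes in a fixed order, all watchers must see the same order."""
    queues = [queue.Queue() for _ in range(n_watchers)]
    observed = [[] for _ in range(n_watchers)]

    def watcher(i):
        while True:
            ev = queues[i].get()
            if ev is None:        # sentinel: watch closed
                return
            observed[i].append(ev)

    threads = [threading.Thread(target=watcher, args=(i,)) for i in range(n_watchers)]
    for t in threads:
        t.start()
    for ev in events:             # producer: events carry increasing resource versions
        for q in queues:
            q.put(ev)
    for q in queues:
        q.put(None)
    for t in threads:
        t.join()
    return observed
```

The real test is stronger: each watch is started from a different resource version, and the suffixes observed must still agree.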
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":48,"skipped":904,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:47:05.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 7 23:47:05.822: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 7 23:47:07.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900025, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900025, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900025, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900025, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 7 23:47:10.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:47:21.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4899" for this suite. STEP: Destroying namespace "webhook-4899-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.940 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":49,"skipped":910,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:47:21.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 7 23:47:21.195: INFO: Waiting up to 5m0s for pod "downward-api-dd2d8810-1860-4c5a-a6a3-76912985ac88" in namespace "downward-api-1828" to be "Succeeded or Failed" Apr 7 23:47:21.206: INFO: Pod "downward-api-dd2d8810-1860-4c5a-a6a3-76912985ac88": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.416839ms Apr 7 23:47:23.211: INFO: Pod "downward-api-dd2d8810-1860-4c5a-a6a3-76912985ac88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015922399s Apr 7 23:47:25.214: INFO: Pod "downward-api-dd2d8810-1860-4c5a-a6a3-76912985ac88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019537328s STEP: Saw pod success Apr 7 23:47:25.214: INFO: Pod "downward-api-dd2d8810-1860-4c5a-a6a3-76912985ac88" satisfied condition "Succeeded or Failed" Apr 7 23:47:25.217: INFO: Trying to get logs from node latest-worker2 pod downward-api-dd2d8810-1860-4c5a-a6a3-76912985ac88 container dapi-container: STEP: delete the pod Apr 7 23:47:25.238: INFO: Waiting for pod downward-api-dd2d8810-1860-4c5a-a6a3-76912985ac88 to disappear Apr 7 23:47:25.243: INFO: Pod downward-api-dd2d8810-1860-4c5a-a6a3-76912985ac88 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:47:25.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1828" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":918,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:47:25.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-e99bd081-eb15-4a20-9976-1039dca0ab9c STEP: Creating configMap with name cm-test-opt-upd-46a32f91-4b82-494d-8e41-9441557222dd STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e99bd081-eb15-4a20-9976-1039dca0ab9c STEP: Updating configmap cm-test-opt-upd-46a32f91-4b82-494d-8e41-9441557222dd STEP: Creating configMap with name cm-test-opt-create-1ccbaf33-8e3a-4fc8-8b69-215782b4e368 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:47:35.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5051" for this suite. 
• [SLOW TEST:10.208 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":937,"failed":0} SSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:47:35.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-8971 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8971 to expose endpoints map[] Apr 7 23:47:35.598: INFO: Get endpoints failed (64.87613ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 7 23:47:36.602: INFO: successfully validated that service multi-endpoint-test in namespace services-8971 exposes endpoints map[] (1.068778588s elapsed) STEP: Creating pod pod1 in namespace services-8971 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8971 
to expose endpoints map[pod1:[100]] Apr 7 23:47:39.722: INFO: successfully validated that service multi-endpoint-test in namespace services-8971 exposes endpoints map[pod1:[100]] (3.113008858s elapsed) STEP: Creating pod pod2 in namespace services-8971 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8971 to expose endpoints map[pod1:[100] pod2:[101]] Apr 7 23:47:42.959: INFO: successfully validated that service multi-endpoint-test in namespace services-8971 exposes endpoints map[pod1:[100] pod2:[101]] (3.232904438s elapsed) STEP: Deleting pod pod1 in namespace services-8971 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8971 to expose endpoints map[pod2:[101]] Apr 7 23:47:44.034: INFO: successfully validated that service multi-endpoint-test in namespace services-8971 exposes endpoints map[pod2:[101]] (1.070764468s elapsed) STEP: Deleting pod pod2 in namespace services-8971 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8971 to expose endpoints map[] Apr 7 23:47:45.055: INFO: successfully validated that service multi-endpoint-test in namespace services-8971 exposes endpoints map[] (1.015697838s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:47:45.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8971" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:9.706 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":52,"skipped":941,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:47:45.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:47:45.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5103" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":53,"skipped":946,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:47:45.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Apr 7 23:47:45.684: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Apr 7 23:47:45.777: INFO: stderr: "" Apr 7 23:47:45.777: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:47:45.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-9247" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":54,"skipped":953,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:47:45.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 7 23:47:45.847: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4090ef5-d42b-4f6a-ad12-ab2687519dff" in namespace "projected-5016" to be "Succeeded or Failed" Apr 7 23:47:45.958: INFO: Pod "downwardapi-volume-b4090ef5-d42b-4f6a-ad12-ab2687519dff": Phase="Pending", Reason="", readiness=false. Elapsed: 111.117632ms Apr 7 23:47:47.976: INFO: Pod "downwardapi-volume-b4090ef5-d42b-4f6a-ad12-ab2687519dff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128752013s Apr 7 23:47:49.979: INFO: Pod "downwardapi-volume-b4090ef5-d42b-4f6a-ad12-ab2687519dff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.131931098s STEP: Saw pod success Apr 7 23:47:49.979: INFO: Pod "downwardapi-volume-b4090ef5-d42b-4f6a-ad12-ab2687519dff" satisfied condition "Succeeded or Failed" Apr 7 23:47:49.982: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b4090ef5-d42b-4f6a-ad12-ab2687519dff container client-container: STEP: delete the pod Apr 7 23:47:50.008: INFO: Waiting for pod downwardapi-volume-b4090ef5-d42b-4f6a-ad12-ab2687519dff to disappear Apr 7 23:47:50.035: INFO: Pod downwardapi-volume-b4090ef5-d42b-4f6a-ad12-ab2687519dff no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:47:50.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5016" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":988,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:47:50.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
configmap-test-volume-ff64affe-f062-4cba-8653-adfaf1616531 STEP: Creating a pod to test consume configMaps Apr 7 23:47:50.282: INFO: Waiting up to 5m0s for pod "pod-configmaps-61997772-f2d7-48fa-877d-a55a87d951f0" in namespace "configmap-8635" to be "Succeeded or Failed" Apr 7 23:47:50.284: INFO: Pod "pod-configmaps-61997772-f2d7-48fa-877d-a55a87d951f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.537397ms Apr 7 23:47:52.341: INFO: Pod "pod-configmaps-61997772-f2d7-48fa-877d-a55a87d951f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059449028s Apr 7 23:47:54.345: INFO: Pod "pod-configmaps-61997772-f2d7-48fa-877d-a55a87d951f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063466211s STEP: Saw pod success Apr 7 23:47:54.345: INFO: Pod "pod-configmaps-61997772-f2d7-48fa-877d-a55a87d951f0" satisfied condition "Succeeded or Failed" Apr 7 23:47:54.348: INFO: Trying to get logs from node latest-worker pod pod-configmaps-61997772-f2d7-48fa-877d-a55a87d951f0 container configmap-volume-test: STEP: delete the pod Apr 7 23:47:54.408: INFO: Waiting for pod pod-configmaps-61997772-f2d7-48fa-877d-a55a87d951f0 to disappear Apr 7 23:47:54.411: INFO: Pod pod-configmaps-61997772-f2d7-48fa-877d-a55a87d951f0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:47:54.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8635" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":1047,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:47:54.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 7 23:47:54.916: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 7 23:47:56.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900074, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900074, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900075, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900074, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 7 23:47:59.953: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:48:00.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7475" for this suite. STEP: Destroying namespace "webhook-7475-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.690 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":57,"skipped":1055,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:48:00.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 7 23:48:00.884: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 7 23:48:02.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900080, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900080, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900081, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900080, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 7 23:48:05.964: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:48:06.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7804" for this suite. STEP: Destroying namespace "webhook-7804-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.032 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":58,"skipped":1073,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:48:06.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Apr 7 23:48:06.218: INFO: Waiting up to 5m0s for pod "client-containers-da1eefea-e269-48da-8ed7-c09dfa71c40c" in namespace "containers-2298" to be "Succeeded or Failed" Apr 7 23:48:06.226: INFO: Pod "client-containers-da1eefea-e269-48da-8ed7-c09dfa71c40c": 
Phase="Pending", Reason="", readiness=false. Elapsed: 7.717388ms Apr 7 23:48:08.229: INFO: Pod "client-containers-da1eefea-e269-48da-8ed7-c09dfa71c40c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011174694s Apr 7 23:48:10.252: INFO: Pod "client-containers-da1eefea-e269-48da-8ed7-c09dfa71c40c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033533853s STEP: Saw pod success Apr 7 23:48:10.252: INFO: Pod "client-containers-da1eefea-e269-48da-8ed7-c09dfa71c40c" satisfied condition "Succeeded or Failed" Apr 7 23:48:10.255: INFO: Trying to get logs from node latest-worker pod client-containers-da1eefea-e269-48da-8ed7-c09dfa71c40c container test-container: STEP: delete the pod Apr 7 23:48:10.348: INFO: Waiting for pod client-containers-da1eefea-e269-48da-8ed7-c09dfa71c40c to disappear Apr 7 23:48:10.365: INFO: Pod client-containers-da1eefea-e269-48da-8ed7-c09dfa71c40c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:48:10.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2298" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":1143,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:48:10.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:48:10.437: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:48:11.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3521" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":60,"skipped":1146,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:48:11.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2758 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-2758 I0407 23:48:11.754219 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2758, replica count: 2 I0407 23:48:14.804789 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0407 23:48:17.805018 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 7 23:48:17.805: INFO: Creating new exec pod Apr 7 23:48:22.823: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2758 execpod4rqmz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 7 23:48:25.240: INFO: stderr: "I0407 23:48:25.159480 540 log.go:172] (0xc000ae4b00) (0xc0007d1680) Create stream\nI0407 23:48:25.159521 540 log.go:172] (0xc000ae4b00) (0xc0007d1680) Stream added, broadcasting: 1\nI0407 23:48:25.163240 540 log.go:172] (0xc000ae4b00) Reply frame received for 1\nI0407 23:48:25.163271 540 log.go:172] (0xc000ae4b00) (0xc0006495e0) Create stream\nI0407 23:48:25.163278 540 log.go:172] (0xc000ae4b00) (0xc0006495e0) Stream added, broadcasting: 3\nI0407 23:48:25.164190 540 log.go:172] (0xc000ae4b00) Reply frame received for 3\nI0407 23:48:25.164236 540 log.go:172] (0xc000ae4b00) (0xc0007d1720) Create stream\nI0407 23:48:25.164249 540 log.go:172] (0xc000ae4b00) (0xc0007d1720) Stream added, broadcasting: 5\nI0407 23:48:25.165086 540 log.go:172] (0xc000ae4b00) Reply frame received for 5\nI0407 23:48:25.231348 540 log.go:172] (0xc000ae4b00) Data frame received for 5\nI0407 23:48:25.231399 540 log.go:172] (0xc0007d1720) (5) Data frame handling\nI0407 23:48:25.231422 540 log.go:172] (0xc0007d1720) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0407 23:48:25.231784 540 log.go:172] (0xc000ae4b00) Data frame received for 5\nI0407 23:48:25.231821 540 log.go:172] (0xc0007d1720) (5) Data frame handling\nI0407 23:48:25.231860 540 log.go:172] (0xc0007d1720) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0407 23:48:25.232254 540 log.go:172] (0xc000ae4b00) Data frame received for 5\nI0407 23:48:25.232290 540 log.go:172] (0xc0007d1720) (5) Data frame handling\nI0407 23:48:25.232337 540 log.go:172] (0xc000ae4b00) Data frame received for 3\nI0407 23:48:25.232361 540 log.go:172] (0xc0006495e0) (3) Data frame handling\nI0407 23:48:25.234589 540 log.go:172] (0xc000ae4b00) Data frame received for 1\nI0407 
23:48:25.234611 540 log.go:172] (0xc0007d1680) (1) Data frame handling\nI0407 23:48:25.234626 540 log.go:172] (0xc0007d1680) (1) Data frame sent\nI0407 23:48:25.234642 540 log.go:172] (0xc000ae4b00) (0xc0007d1680) Stream removed, broadcasting: 1\nI0407 23:48:25.234676 540 log.go:172] (0xc000ae4b00) Go away received\nI0407 23:48:25.235174 540 log.go:172] (0xc000ae4b00) (0xc0007d1680) Stream removed, broadcasting: 1\nI0407 23:48:25.235206 540 log.go:172] (0xc000ae4b00) (0xc0006495e0) Stream removed, broadcasting: 3\nI0407 23:48:25.235226 540 log.go:172] (0xc000ae4b00) (0xc0007d1720) Stream removed, broadcasting: 5\n" Apr 7 23:48:25.241: INFO: stdout: "" Apr 7 23:48:25.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2758 execpod4rqmz -- /bin/sh -x -c nc -zv -t -w 2 10.96.159.108 80' Apr 7 23:48:25.431: INFO: stderr: "I0407 23:48:25.362412 576 log.go:172] (0xc000a90000) (0xc0007f1220) Create stream\nI0407 23:48:25.362514 576 log.go:172] (0xc000a90000) (0xc0007f1220) Stream added, broadcasting: 1\nI0407 23:48:25.365396 576 log.go:172] (0xc000a90000) Reply frame received for 1\nI0407 23:48:25.365459 576 log.go:172] (0xc000a90000) (0xc0009b4000) Create stream\nI0407 23:48:25.365488 576 log.go:172] (0xc000a90000) (0xc0009b4000) Stream added, broadcasting: 3\nI0407 23:48:25.366264 576 log.go:172] (0xc000a90000) Reply frame received for 3\nI0407 23:48:25.366305 576 log.go:172] (0xc000a90000) (0xc000664000) Create stream\nI0407 23:48:25.366326 576 log.go:172] (0xc000a90000) (0xc000664000) Stream added, broadcasting: 5\nI0407 23:48:25.367068 576 log.go:172] (0xc000a90000) Reply frame received for 5\nI0407 23:48:25.425644 576 log.go:172] (0xc000a90000) Data frame received for 5\nI0407 23:48:25.425671 576 log.go:172] (0xc000664000) (5) Data frame handling\nI0407 23:48:25.425683 576 log.go:172] (0xc000664000) (5) Data frame sent\nI0407 23:48:25.425690 576 log.go:172] (0xc000a90000) Data frame 
received for 5\nI0407 23:48:25.425700 576 log.go:172] (0xc000664000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.159.108 80\nConnection to 10.96.159.108 80 port [tcp/http] succeeded!\nI0407 23:48:25.425747 576 log.go:172] (0xc000a90000) Data frame received for 3\nI0407 23:48:25.425795 576 log.go:172] (0xc0009b4000) (3) Data frame handling\nI0407 23:48:25.427119 576 log.go:172] (0xc000a90000) Data frame received for 1\nI0407 23:48:25.427146 576 log.go:172] (0xc0007f1220) (1) Data frame handling\nI0407 23:48:25.427160 576 log.go:172] (0xc0007f1220) (1) Data frame sent\nI0407 23:48:25.427170 576 log.go:172] (0xc000a90000) (0xc0007f1220) Stream removed, broadcasting: 1\nI0407 23:48:25.427360 576 log.go:172] (0xc000a90000) Go away received\nI0407 23:48:25.427459 576 log.go:172] (0xc000a90000) (0xc0007f1220) Stream removed, broadcasting: 1\nI0407 23:48:25.427474 576 log.go:172] (0xc000a90000) (0xc0009b4000) Stream removed, broadcasting: 3\nI0407 23:48:25.427482 576 log.go:172] (0xc000a90000) (0xc000664000) Stream removed, broadcasting: 5\n" Apr 7 23:48:25.431: INFO: stdout: "" Apr 7 23:48:25.431: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:48:25.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2758" for this suite. 
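The ExternalName-to-ClusterIP transition exercised by this test can be sketched with a pair of manifests. This is illustrative only: the service name `externalname-service` and port 80 appear in the log above, but the original `externalName` target, the selector labels, and `example.com` are placeholders the log does not record.

```yaml
# Initial state: type=ExternalName gives the service no cluster IP,
# only a DNS CNAME to the external target (placeholder here).
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com   # hypothetical target; not recorded in the log
---
# After the change: type=ClusterIP allocates a virtual IP
# (10.96.159.108 in this run) and routes to backing pods via a selector.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    name: externalname-service   # assumed label; the test backs the service with an RC of the same name
  ports:
  - port: 80
    targetPort: 80
```

The log then verifies connectivity both by service name and by the allocated cluster IP with `nc -zv -t -w 2 <host> 80` from an exec pod.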
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:13.855 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":61,"skipped":1167,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:48:25.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-93031c1f-9bee-458c-a8b9-843727ff0061 STEP: Creating a pod to test consume configMaps Apr 7 23:48:25.552: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-df57a4e6-8418-4575-ba43-167801834fb8" in namespace "projected-2203" to be "Succeeded or Failed" Apr 7 23:48:25.568: INFO: Pod "pod-projected-configmaps-df57a4e6-8418-4575-ba43-167801834fb8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.162885ms Apr 7 23:48:27.571: INFO: Pod "pod-projected-configmaps-df57a4e6-8418-4575-ba43-167801834fb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019551457s Apr 7 23:48:29.575: INFO: Pod "pod-projected-configmaps-df57a4e6-8418-4575-ba43-167801834fb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02356332s STEP: Saw pod success Apr 7 23:48:29.575: INFO: Pod "pod-projected-configmaps-df57a4e6-8418-4575-ba43-167801834fb8" satisfied condition "Succeeded or Failed" Apr 7 23:48:29.579: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-df57a4e6-8418-4575-ba43-167801834fb8 container projected-configmap-volume-test: STEP: delete the pod Apr 7 23:48:29.618: INFO: Waiting for pod pod-projected-configmaps-df57a4e6-8418-4575-ba43-167801834fb8 to disappear Apr 7 23:48:29.627: INFO: Pod pod-projected-configmaps-df57a4e6-8418-4575-ba43-167801834fb8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:48:29.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2203" for this suite. 
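The projected-ConfigMap-with-defaultMode pattern this test covers can be sketched as below. Names are illustrative and the `0400` mode is an assumption; the log records neither the actual mode nor the ConfigMap contents. The test is `[LinuxOnly]` because file mode bits are a POSIX concept.

```yaml
# Sketch: a ConfigMap projected into a volume with an explicit defaultMode,
# consumed by a pod that runs to completion ("Succeeded" in the log).
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # Print the mode and content of the projected file, then exit 0.
    command: ["sh", "-c", "stat -c '%a' /etc/projected/data && cat /etc/projected/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400   # assumed mode; applies to all projected files unless overridden per-item
      sources:
      - configMap:
          name: projected-configmap-test-volume-example   # hypothetical name
```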
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1188,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:48:29.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:48:29.693: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 7 23:48:34.696: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 7 23:48:34.697: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 7 23:48:36.701: INFO: Creating deployment "test-rollover-deployment" Apr 7 23:48:36.719: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 7 23:48:38.726: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 7 23:48:38.733: INFO: Ensure that both replica sets have 1 created replica Apr 7 23:48:38.739: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 7 23:48:38.744: INFO: Updating deployment test-rollover-deployment Apr 7 23:48:38.744: INFO: Wait deployment "test-rollover-deployment" to be observed by the 
deployment controller Apr 7 23:48:40.758: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 7 23:48:40.764: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 7 23:48:40.768: INFO: all replica sets need to contain the pod-template-hash label Apr 7 23:48:40.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900118, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 23:48:42.777: INFO: all replica sets need to contain the pod-template-hash label Apr 7 23:48:42.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900122, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 23:48:44.777: INFO: all replica sets need to contain the pod-template-hash label Apr 7 23:48:44.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900122, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 23:48:46.777: INFO: all replica sets need to contain the pod-template-hash label Apr 7 23:48:46.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900122, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 23:48:48.776: INFO: all replica sets need to contain the pod-template-hash label Apr 7 23:48:48.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900122, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 23:48:50.777: INFO: all replica sets need to contain the pod-template-hash label Apr 7 23:48:50.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900122, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900116, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 23:48:52.777: INFO: Apr 7 23:48:52.777: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 7 23:48:52.785: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3833 /apis/apps/v1/namespaces/deployment-3833/deployments/test-rollover-deployment 4d17ea3f-974f-44dd-bdb1-ee33a0283f73 6269183 2 2020-04-07 23:48:36 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f9a1a8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-07 23:48:36 +0000 UTC,LastTransitionTime:2020-04-07 23:48:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-07 23:48:52 +0000 UTC,LastTransitionTime:2020-04-07 23:48:36 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 7 23:48:52.788: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-3833 /apis/apps/v1/namespaces/deployment-3833/replicasets/test-rollover-deployment-78df7bc796 d6116204-dc05-439d-83ac-e62e4b2b3ad9 6269172 2 2020-04-07 23:48:38 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 4d17ea3f-974f-44dd-bdb1-ee33a0283f73 0xc0054cee97 0xc0054cee98}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0054cef08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 7 23:48:52.788: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 7 23:48:52.788: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3833 /apis/apps/v1/namespaces/deployment-3833/replicasets/test-rollover-controller 0686ceef-9515-4e40-84a9-fee965a365b1 6269182 2 2020-04-07 23:48:29 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 4d17ea3f-974f-44dd-bdb1-ee33a0283f73 0xc0054cedc7 0xc0054cedc8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0054cee28 ClusterFirst map[] false false false 
PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 7 23:48:52.788: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-3833 /apis/apps/v1/namespaces/deployment-3833/replicasets/test-rollover-deployment-f6c94f66c bdb3557c-96b8-4976-b3e1-63f9f4ea5686 6269123 2 2020-04-07 23:48:36 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 4d17ea3f-974f-44dd-bdb1-ee33a0283f73 0xc0054cef70 0xc0054cef71}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0054cefe8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 7 23:48:52.791: INFO: Pod "test-rollover-deployment-78df7bc796-gtjc2" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-gtjc2 test-rollover-deployment-78df7bc796- deployment-3833 /api/v1/namespaces/deployment-3833/pods/test-rollover-deployment-78df7bc796-gtjc2 ddd6d39c-3222-4981-873c-5b85595f7aeb 6269140 0 2020-04-07 23:48:38 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 d6116204-dc05-439d-83ac-e62e4b2b3ad9 0xc0054cf5a7 0xc0054cf5a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h98zh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h98zh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h98zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Read
OnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:48:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:48:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:48:42 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-07 23:48:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.103,StartTime:2020-04-07 23:48:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-07 23:48:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://4d2a50f2586ceceb9056851868af65e33b59f77d701ad6e18f05a590b89d2be2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:48:52.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3833" for this suite. • [SLOW TEST:23.165 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":63,"skipped":1209,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:48:52.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:49:08.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5433" for this suite. • [SLOW TEST:16.157 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":275,"completed":64,"skipped":1210,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:49:08.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 7 23:49:09.020: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e7621ee-d870-42b6-b47a-67dcd67fafc6" in namespace "projected-282" to be "Succeeded or Failed" Apr 7 23:49:09.023: INFO: Pod "downwardapi-volume-8e7621ee-d870-42b6-b47a-67dcd67fafc6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.656506ms Apr 7 23:49:11.028: INFO: Pod "downwardapi-volume-8e7621ee-d870-42b6-b47a-67dcd67fafc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007856836s Apr 7 23:49:13.032: INFO: Pod "downwardapi-volume-8e7621ee-d870-42b6-b47a-67dcd67fafc6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01198808s STEP: Saw pod success Apr 7 23:49:13.032: INFO: Pod "downwardapi-volume-8e7621ee-d870-42b6-b47a-67dcd67fafc6" satisfied condition "Succeeded or Failed" Apr 7 23:49:13.035: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8e7621ee-d870-42b6-b47a-67dcd67fafc6 container client-container: STEP: delete the pod Apr 7 23:49:13.082: INFO: Waiting for pod downwardapi-volume-8e7621ee-d870-42b6-b47a-67dcd67fafc6 to disappear Apr 7 23:49:13.096: INFO: Pod downwardapi-volume-8e7621ee-d870-42b6-b47a-67dcd67fafc6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:49:13.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-282" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:49:13.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:49:29.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2180" for this suite. • [SLOW TEST:16.273 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":66,"skipped":1320,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:49:29.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 7 23:49:32.557: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:49:32.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2244" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1326,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:49:32.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 7 23:49:32.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-827b1076-020d-4880-939f-81acb8f44d12" in namespace "downward-api-7043" to be "Succeeded or Failed" Apr 7 23:49:32.786: INFO: Pod "downwardapi-volume-827b1076-020d-4880-939f-81acb8f44d12": Phase="Pending", Reason="", readiness=false. Elapsed: 70.037952ms Apr 7 23:49:34.809: INFO: Pod "downwardapi-volume-827b1076-020d-4880-939f-81acb8f44d12": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.093090293s Apr 7 23:49:36.813: INFO: Pod "downwardapi-volume-827b1076-020d-4880-939f-81acb8f44d12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097028935s STEP: Saw pod success Apr 7 23:49:36.813: INFO: Pod "downwardapi-volume-827b1076-020d-4880-939f-81acb8f44d12" satisfied condition "Succeeded or Failed" Apr 7 23:49:36.816: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-827b1076-020d-4880-939f-81acb8f44d12 container client-container: STEP: delete the pod Apr 7 23:49:36.840: INFO: Waiting for pod downwardapi-volume-827b1076-020d-4880-939f-81acb8f44d12 to disappear Apr 7 23:49:36.899: INFO: Pod downwardapi-volume-827b1076-020d-4880-939f-81acb8f44d12 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:49:36.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7043" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1329,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:49:36.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 7 23:49:36.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2845' Apr 7 23:49:37.177: INFO: stderr: "" Apr 7 23:49:37.177: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Apr 7 23:49:37.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete 
pods e2e-test-httpd-pod --namespace=kubectl-2845' Apr 7 23:49:39.947: INFO: stderr: "" Apr 7 23:49:39.947: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:49:39.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2845" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":69,"skipped":1351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:49:39.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 7 23:49:40.002: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:49:46.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7685" for this 
suite. • [SLOW TEST:7.012 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":70,"skipped":1403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:49:46.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 7 23:49:46.995: INFO: >>> kubeConfig: /root/.kube/config Apr 7 23:49:49.930: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:49:59.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5598" for this suite. 
• [SLOW TEST:12.502 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":71,"skipped":1430,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:49:59.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 7 23:50:00.031: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 7 23:50:02.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900200, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900200, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900200, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900200, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 7 23:50:05.056: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:50:05.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4013" for this suite. STEP: Destroying namespace "webhook-4013-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.826 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":72,"skipped":1435,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:50:05.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Apr 7 23:50:05.868: INFO: created pod pod-service-account-defaultsa Apr 7 23:50:05.868: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 7 23:50:05.875: INFO: created pod pod-service-account-mountsa Apr 7 23:50:05.875: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 7 23:50:05.882: INFO: created pod pod-service-account-nomountsa Apr 7 23:50:05.882: INFO: pod 
pod-service-account-nomountsa service account token volume mount: false Apr 7 23:50:05.906: INFO: created pod pod-service-account-defaultsa-mountspec Apr 7 23:50:05.906: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 7 23:50:05.966: INFO: created pod pod-service-account-mountsa-mountspec Apr 7 23:50:05.966: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 7 23:50:05.972: INFO: created pod pod-service-account-nomountsa-mountspec Apr 7 23:50:05.972: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 7 23:50:05.997: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 7 23:50:05.997: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 7 23:50:06.027: INFO: created pod pod-service-account-mountsa-nomountspec Apr 7 23:50:06.027: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 7 23:50:06.054: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 7 23:50:06.054: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:50:06.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-495" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":73,"skipped":1439,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:50:06.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-25644b01-1b6d-4d7c-ba90-9b5f879fdd8f STEP: Creating a pod to test consume configMaps Apr 7 23:50:06.280: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd" in namespace "configmap-8248" to be "Succeeded or Failed" Apr 7 23:50:06.285: INFO: Pod "pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.185141ms Apr 7 23:50:08.289: INFO: Pod "pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009333556s Apr 7 23:50:10.924: INFO: Pod "pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.64462646s Apr 7 23:50:12.928: INFO: Pod "pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.648625406s Apr 7 23:50:15.068: INFO: Pod "pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.787888205s Apr 7 23:50:17.108: INFO: Pod "pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd": Phase="Running", Reason="", readiness=true. Elapsed: 10.8277298s Apr 7 23:50:19.112: INFO: Pod "pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.83176404s STEP: Saw pod success Apr 7 23:50:19.112: INFO: Pod "pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd" satisfied condition "Succeeded or Failed" Apr 7 23:50:19.114: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd container configmap-volume-test: STEP: delete the pod Apr 7 23:50:19.147: INFO: Waiting for pod pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd to disappear Apr 7 23:50:19.164: INFO: Pod pod-configmaps-a7fb6c79-cee2-429a-ad56-620195ce97cd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:50:19.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8248" for this suite. 
• [SLOW TEST:12.996 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1445,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:50:19.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 7 23:50:19.221: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 7 23:50:19.240: INFO: Waiting for terminating namespaces to be deleted... 
Apr 7 23:50:19.243: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 7 23:50:19.249: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 7 23:50:19.249: INFO: Container kindnet-cni ready: true, restart count 0 Apr 7 23:50:19.249: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 7 23:50:19.249: INFO: Container kube-proxy ready: true, restart count 0 Apr 7 23:50:19.249: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 7 23:50:19.254: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 7 23:50:19.254: INFO: Container kindnet-cni ready: true, restart count 0 Apr 7 23:50:19.254: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 7 23:50:19.254: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-60315a1b-f794-4894-9910-3eefff95e1bb 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-60315a1b-f794-4894-9910-3eefff95e1bb off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-60315a1b-f794-4894-9910-3eefff95e1bb [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:50:35.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1483" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.340 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":75,"skipped":1445,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:50:35.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 7 23:50:35.606: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 7 23:50:46.205: INFO: >>> kubeConfig: /root/.kube/config Apr 7 23:50:49.108: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:50:59.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5222" for this suite. 
• [SLOW TEST:24.120 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":76,"skipped":1462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:50:59.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:50:59.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 7 23:50:59.888: INFO: stderr: "" Apr 7 23:50:59.888: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:50:59.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5899" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":77,"skipped":1488,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:50:59.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-709d8273-626f-4910-af0f-fa30b6dd3767 STEP: Creating a pod to test consume configMaps Apr 7 23:51:00.004: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-976eb916-e68e-4e1c-a131-afa3c25a77a1" in namespace "projected-1138" to be "Succeeded or Failed" Apr 7 23:51:00.014: INFO: Pod "pod-projected-configmaps-976eb916-e68e-4e1c-a131-afa3c25a77a1": Phase="Pending", 
Reason="", readiness=false. Elapsed: 10.624013ms Apr 7 23:51:02.051: INFO: Pod "pod-projected-configmaps-976eb916-e68e-4e1c-a131-afa3c25a77a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047206843s Apr 7 23:51:04.128: INFO: Pod "pod-projected-configmaps-976eb916-e68e-4e1c-a131-afa3c25a77a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124020694s STEP: Saw pod success Apr 7 23:51:04.128: INFO: Pod "pod-projected-configmaps-976eb916-e68e-4e1c-a131-afa3c25a77a1" satisfied condition "Succeeded or Failed" Apr 7 23:51:04.130: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-976eb916-e68e-4e1c-a131-afa3c25a77a1 container projected-configmap-volume-test: STEP: delete the pod Apr 7 23:51:04.161: INFO: Waiting for pod pod-projected-configmaps-976eb916-e68e-4e1c-a131-afa3c25a77a1 to disappear Apr 7 23:51:04.182: INFO: Pod pod-projected-configmaps-976eb916-e68e-4e1c-a131-afa3c25a77a1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:51:04.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1138" for this suite. 
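The projected-configmap test above creates a ConfigMap, projects it into a volume, and runs a non-root pod that reads the projected file. A minimal manifest reproducing the same setup outside the e2e framework might look like this (all names, the key/value pair, and the mount path are illustrative, not the generated ones in the log):

```yaml
# Sketch of the "projected configMap ... as non-root" scenario.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-projected-configmap        # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-configmap-pod    # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                     # non-root, as the test title requires
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: demo-projected-configmap
```

As in the log, such a pod runs to completion ("Succeeded") and its container log contains the projected file's content.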
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1506,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:51:04.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 7 23:51:12.329: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 23:51:12.349: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 23:51:14.349: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 23:51:14.353: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 23:51:16.349: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 23:51:16.353: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 23:51:18.349: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 23:51:18.354: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 23:51:20.349: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 23:51:20.353: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 23:51:22.349: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 23:51:22.361: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 23:51:24.349: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 23:51:24.353: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:51:24.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3914" for this suite. 
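The poststart-http-hook test above first starts a handler pod to receive the HTTPGet callback, then creates a pod whose container carries a `postStart` lifecycle hook pointing at it. A hedged sketch of the hooked pod follows; the handler IP, port, and path are assumptions for illustration (the framework wires in the handler pod's actual IP):

```yaml
# Sketch of a pod with a postStart HTTPGet lifecycle hook.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook    # same name the test logs show
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.2         # illustrative image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart     # assumed handler endpoint
          port: 8080                    # assumed handler port
          host: 10.244.1.10             # assumed: the handler pod's IP
```

The kubelet fires the hook immediately after the container starts; the test then verifies the handler received the request before deleting the pod, which is the "Waiting for pod ... to disappear" loop in the log.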
• [SLOW TEST:20.172 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1522,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:51:24.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:51:24.473: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-569119eb-14a2-48c8-a2df-0ca8179e5f86" in namespace "security-context-test-5704" to be "Succeeded or Failed" Apr 7 23:51:24.485: INFO: Pod 
"busybox-privileged-false-569119eb-14a2-48c8-a2df-0ca8179e5f86": Phase="Pending", Reason="", readiness=false. Elapsed: 12.259601ms Apr 7 23:51:26.489: INFO: Pod "busybox-privileged-false-569119eb-14a2-48c8-a2df-0ca8179e5f86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01539063s Apr 7 23:51:28.499: INFO: Pod "busybox-privileged-false-569119eb-14a2-48c8-a2df-0ca8179e5f86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026110773s Apr 7 23:51:28.499: INFO: Pod "busybox-privileged-false-569119eb-14a2-48c8-a2df-0ca8179e5f86" satisfied condition "Succeeded or Failed" Apr 7 23:51:28.508: INFO: Got logs for pod "busybox-privileged-false-569119eb-14a2-48c8-a2df-0ca8179e5f86": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:51:28.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5704" for this suite. 
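The security-context test above is worth unpacking: with `privileged: false`, network-administration operations inside the container are denied, which is exactly what the captured log line "ip: RTNETLINK answers: Operation not permitted" demonstrates. A minimal manifest reproducing the check (pod name and command are illustrative):

```yaml
# Sketch: an unprivileged container's netlink operation should be refused.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-privileged-false
    image: busybox
    # Attempting a link change requires CAP_NET_ADMIN; expected to be denied.
    command: ["sh", "-c", "ip link add dummy0 type dummy; true"]
    securityContext:
      privileged: false                 # the condition under test
```

Running the same command with `privileged: true` would succeed, since privileged containers get the full capability set and device access of the host.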
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1530,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:51:28.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 7 23:51:28.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0dfc2cba-d41f-460f-96f4-c2572ac47efe" in namespace "projected-5385" to be "Succeeded or Failed" Apr 7 23:51:28.619: INFO: Pod "downwardapi-volume-0dfc2cba-d41f-460f-96f4-c2572ac47efe": Phase="Pending", Reason="", readiness=false. Elapsed: 24.557135ms Apr 7 23:51:30.623: INFO: Pod "downwardapi-volume-0dfc2cba-d41f-460f-96f4-c2572ac47efe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028044042s Apr 7 23:51:32.627: INFO: Pod "downwardapi-volume-0dfc2cba-d41f-460f-96f4-c2572ac47efe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032212819s STEP: Saw pod success Apr 7 23:51:32.627: INFO: Pod "downwardapi-volume-0dfc2cba-d41f-460f-96f4-c2572ac47efe" satisfied condition "Succeeded or Failed" Apr 7 23:51:32.630: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0dfc2cba-d41f-460f-96f4-c2572ac47efe container client-container: STEP: delete the pod Apr 7 23:51:32.681: INFO: Waiting for pod downwardapi-volume-0dfc2cba-d41f-460f-96f4-c2572ac47efe to disappear Apr 7 23:51:32.686: INFO: Pod downwardapi-volume-0dfc2cba-d41f-460f-96f4-c2572ac47efe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:51:32.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5385" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1612,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:51:32.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-6blj STEP: Creating a pod to test atomic-volume-subpath Apr 7 23:51:32.779: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-6blj" in namespace "subpath-6022" to be "Succeeded or Failed" Apr 7 23:51:32.799: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Pending", Reason="", readiness=false. Elapsed: 20.652716ms Apr 7 23:51:34.803: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024768693s Apr 7 23:51:36.807: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Running", Reason="", readiness=true. Elapsed: 4.02852625s Apr 7 23:51:38.811: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Running", Reason="", readiness=true. Elapsed: 6.031899956s Apr 7 23:51:40.814: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Running", Reason="", readiness=true. Elapsed: 8.035759378s Apr 7 23:51:42.819: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Running", Reason="", readiness=true. Elapsed: 10.039955178s Apr 7 23:51:44.835: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Running", Reason="", readiness=true. Elapsed: 12.056475926s Apr 7 23:51:46.838: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Running", Reason="", readiness=true. Elapsed: 14.059521559s Apr 7 23:51:48.842: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Running", Reason="", readiness=true. Elapsed: 16.063749299s Apr 7 23:51:50.847: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Running", Reason="", readiness=true. Elapsed: 18.06814915s Apr 7 23:51:52.851: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Running", Reason="", readiness=true. Elapsed: 20.07208294s Apr 7 23:51:54.855: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Running", Reason="", readiness=true. Elapsed: 22.076411345s Apr 7 23:51:56.859: INFO: Pod "pod-subpath-test-secret-6blj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.08021329s STEP: Saw pod success Apr 7 23:51:56.859: INFO: Pod "pod-subpath-test-secret-6blj" satisfied condition "Succeeded or Failed" Apr 7 23:51:56.861: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-6blj container test-container-subpath-secret-6blj: STEP: delete the pod Apr 7 23:51:56.894: INFO: Waiting for pod pod-subpath-test-secret-6blj to disappear Apr 7 23:51:56.908: INFO: Pod pod-subpath-test-secret-6blj no longer exists STEP: Deleting pod pod-subpath-test-secret-6blj Apr 7 23:51:56.908: INFO: Deleting pod "pod-subpath-test-secret-6blj" in namespace "subpath-6022" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:51:56.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6022" for this suite. • [SLOW TEST:24.232 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":82,"skipped":1691,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client 
Apr 7 23:51:56.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 7 23:52:05.044: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 23:52:05.093: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 23:52:07.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 23:52:07.098: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 23:52:09.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 23:52:09.098: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 23:52:11.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 23:52:11.135: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 23:52:13.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 23:52:13.099: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:52:13.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6825" for this suite. 
• [SLOW TEST:16.180 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1697,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:52:13.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:52:24.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-508" for this suite. • [SLOW TEST:11.252 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":84,"skipped":1722,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:52:24.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be 
possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:52:51.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2369" for this suite. • [SLOW TEST:26.748 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1730,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:52:51.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 
Apr 7 23:52:51.203: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 7 23:52:51.209: INFO: Number of nodes with available pods: 0 Apr 7 23:52:51.209: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Apr 7 23:52:51.260: INFO: Number of nodes with available pods: 0 Apr 7 23:52:51.260: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:52:52.264: INFO: Number of nodes with available pods: 0 Apr 7 23:52:52.264: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:52:53.264: INFO: Number of nodes with available pods: 0 Apr 7 23:52:53.264: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:52:54.265: INFO: Number of nodes with available pods: 1 Apr 7 23:52:54.265: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 7 23:52:54.296: INFO: Number of nodes with available pods: 1 Apr 7 23:52:54.296: INFO: Number of running nodes: 0, number of available pods: 1 Apr 7 23:52:55.300: INFO: Number of nodes with available pods: 0 Apr 7 23:52:55.300: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 7 23:52:55.311: INFO: Number of nodes with available pods: 0 Apr 7 23:52:55.311: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:52:56.357: INFO: Number of nodes with available pods: 0 Apr 7 23:52:56.357: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:52:57.315: INFO: Number of nodes with available pods: 0 Apr 7 23:52:57.315: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:52:58.315: INFO: Number of nodes with available pods: 0 Apr 7 23:52:58.315: INFO: Node latest-worker2 is running more than one daemon pod 
Apr 7 23:52:59.316: INFO: Number of nodes with available pods: 0 Apr 7 23:52:59.316: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:53:00.315: INFO: Number of nodes with available pods: 0 Apr 7 23:53:00.315: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:53:01.315: INFO: Number of nodes with available pods: 0 Apr 7 23:53:01.315: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:53:02.315: INFO: Number of nodes with available pods: 0 Apr 7 23:53:02.315: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:53:03.315: INFO: Number of nodes with available pods: 0 Apr 7 23:53:03.315: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:53:04.314: INFO: Number of nodes with available pods: 0 Apr 7 23:53:04.315: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:53:05.315: INFO: Number of nodes with available pods: 0 Apr 7 23:53:05.315: INFO: Node latest-worker2 is running more than one daemon pod Apr 7 23:53:06.315: INFO: Number of nodes with available pods: 1 Apr 7 23:53:06.315: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4364, will wait for the garbage collector to delete the pods Apr 7 23:53:06.381: INFO: Deleting DaemonSet.extensions daemon-set took: 6.532574ms Apr 7 23:53:06.681: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.244546ms Apr 7 23:53:13.084: INFO: Number of nodes with available pods: 0 Apr 7 23:53:13.084: INFO: Number of running nodes: 0, number of available pods: 0 Apr 7 23:53:13.087: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4364/daemonsets","resourceVersion":"6270854"},"items":null} Apr 7 23:53:13.090: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4364/pods","resourceVersion":"6270854"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:53:13.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4364" for this suite. • [SLOW TEST:22.016 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":86,"skipped":1734,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:53:13.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-vzdm STEP: Creating a pod to test atomic-volume-subpath Apr 7 23:53:13.218: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vzdm" in namespace "subpath-923" to be "Succeeded or Failed" Apr 7 23:53:13.237: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Pending", Reason="", readiness=false. Elapsed: 19.305354ms Apr 7 23:53:15.242: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024138448s Apr 7 23:53:17.246: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Running", Reason="", readiness=true. Elapsed: 4.028159121s Apr 7 23:53:19.250: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Running", Reason="", readiness=true. Elapsed: 6.03241426s Apr 7 23:53:21.262: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Running", Reason="", readiness=true. Elapsed: 8.043827512s Apr 7 23:53:23.266: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Running", Reason="", readiness=true. Elapsed: 10.047936284s Apr 7 23:53:25.270: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Running", Reason="", readiness=true. Elapsed: 12.052458201s Apr 7 23:53:27.274: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Running", Reason="", readiness=true. Elapsed: 14.056500708s Apr 7 23:53:29.279: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Running", Reason="", readiness=true. Elapsed: 16.060651999s Apr 7 23:53:31.283: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Running", Reason="", readiness=true. Elapsed: 18.064790523s Apr 7 23:53:33.287: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Running", Reason="", readiness=true. Elapsed: 20.068792017s Apr 7 23:53:35.291: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.07308321s Apr 7 23:53:37.295: INFO: Pod "pod-subpath-test-configmap-vzdm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.077396011s STEP: Saw pod success Apr 7 23:53:37.295: INFO: Pod "pod-subpath-test-configmap-vzdm" satisfied condition "Succeeded or Failed" Apr 7 23:53:37.299: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-vzdm container test-container-subpath-configmap-vzdm: STEP: delete the pod Apr 7 23:53:37.354: INFO: Waiting for pod pod-subpath-test-configmap-vzdm to disappear Apr 7 23:53:37.364: INFO: Pod pod-subpath-test-configmap-vzdm no longer exists STEP: Deleting pod pod-subpath-test-configmap-vzdm Apr 7 23:53:37.365: INFO: Deleting pod "pod-subpath-test-configmap-vzdm" in namespace "subpath-923" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:53:37.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-923" for this suite. 
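The atomic-writer subpath tests above (one with a Secret, one with a ConfigMap mounted over an existing file) exercise `subPath` mounts: instead of mounting the whole volume directory, a single key is mounted at a specific path inside the container. A hedged sketch of the ConfigMap variant, with illustrative names and paths:

```yaml
# Sketch: mounting one ConfigMap key via subPath over a single file path.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-subpath-configmap          # illustrative name
data:
  configmap-key: mount-tested-content
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-subpath-configmap      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/test-volume/file.txt"]
    volumeMounts:
    - name: config-vol
      mountPath: /test-volume/file.txt  # mount lands on a file, not a dir
      subPath: configmap-key            # only this key is mounted
  volumes:
  - name: config-vol
    configMap:
      name: demo-subpath-configmap
```

Note the trade-off the test's long "Running" phase hints at: unlike full projected-volume mounts, `subPath` mounts do not receive atomic updates when the ConfigMap or Secret changes.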
• [SLOW TEST:24.248 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":87,"skipped":1736,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:53:37.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-2a4c4363-e0d1-4ccb-a954-04ae45c43d3c STEP: Creating a pod to test consume secrets Apr 7 23:53:37.441: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-846be816-359b-47b4-8d41-81965e4f0ff6" in namespace "projected-8997" to be "Succeeded or Failed" Apr 7 23:53:37.449: INFO: Pod 
"pod-projected-secrets-846be816-359b-47b4-8d41-81965e4f0ff6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061714ms Apr 7 23:53:39.453: INFO: Pod "pod-projected-secrets-846be816-359b-47b4-8d41-81965e4f0ff6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012533518s Apr 7 23:53:41.457: INFO: Pod "pod-projected-secrets-846be816-359b-47b4-8d41-81965e4f0ff6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016111566s STEP: Saw pod success Apr 7 23:53:41.457: INFO: Pod "pod-projected-secrets-846be816-359b-47b4-8d41-81965e4f0ff6" satisfied condition "Succeeded or Failed" Apr 7 23:53:41.459: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-846be816-359b-47b4-8d41-81965e4f0ff6 container projected-secret-volume-test: STEP: delete the pod Apr 7 23:53:41.487: INFO: Waiting for pod pod-projected-secrets-846be816-359b-47b4-8d41-81965e4f0ff6 to disappear Apr 7 23:53:41.497: INFO: Pod pod-projected-secrets-846be816-359b-47b4-8d41-81965e4f0ff6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:53:41.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8997" for this suite. 
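Each finished spec emits a machine-readable progress record like the `{"msg":"PASSED …","total":275,"completed":…,"skipped":…,"failed":0}` lines interleaved through this log. A small sketch of extracting and tallying those records (the field names come from the log itself; the parsing helper is hypothetical):

```python
import json

def parse_progress(line):
    """Extract the JSON progress record embedded in a Ginkgo log line.
    The record starts at the first '{'; the leading spec marker ('•')
    before it is ignored."""
    return json.loads(line[line.index("{"):])

line = ('•{"msg":"PASSED [sig-storage] Projected secret should be consumable '
        'from pods in volume with mappings and Item Mode set [LinuxOnly] '
        '[NodeConformance] [Conformance]","total":275,"completed":88,'
        '"skipped":1743,"failed":0}')
rec = parse_progress(line)
remaining = rec["total"] - rec["completed"]  # specs still to run
```

This is how external tooling typically tracks suite progress without parsing the free-form log text.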
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1743,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:53:41.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 7 23:53:41.861: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 7 23:53:43.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900421, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900421, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900421, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900421, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 7 23:53:46.908: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:53:46.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5063-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:53:48.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3753" for this suite. STEP: Destroying namespace "webhook-3753-markers" for this suite. 
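The "Wait for the deployment to be ready" step above keys off the `Available` condition in the dumped `DeploymentStatus` struct (`Type:"Available", Status:"False", Reason:"MinimumReplicasUnavailable"`). A minimal sketch of that readiness check over a dict-shaped status (an illustrative stand-in, not client-go code):

```python
def deployment_available(status):
    """Return True if the Deployment's 'Available' condition is 'True'.
    Mirrors the fields printed in the log (readyReplicas,
    conditions[].type/status/reason) in plain-dict form."""
    for cond in status.get("conditions", []):
        if cond["type"] == "Available":
            return cond["status"] == "True"
    return False

# Shaped after the logged status: 1 replica updated, 0 ready yet.
status = {
    "observedGeneration": 1,
    "replicas": 1,
    "readyReplicas": 0,
    "conditions": [
        {"type": "Available", "status": "False",
         "reason": "MinimumReplicasUnavailable"},
        {"type": "Progressing", "status": "True",
         "reason": "ReplicaSetUpdated"},
    ],
}
```

The test loops on exactly this predicate until the webhook deployment reports `Available=True`.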
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.664 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":89,"skipped":1747,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:53:48.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:53:59.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6158" for this suite. • [SLOW TEST:11.128 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":90,"skipped":1752,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:53:59.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 7 23:53:59.356: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:54:12.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8022" for this suite. 
• [SLOW TEST:13.487 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:54:12.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 7 23:54:12.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4439' Apr 7 23:54:13.130: INFO: stderr: "" Apr 7 23:54:13.131: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 7 23:54:14.135: INFO: Selector matched 1 pods for map[app:agnhost] Apr 7 23:54:14.135: INFO: Found 0 / 1 Apr 7 23:54:15.135: INFO: Selector matched 1 pods for map[app:agnhost] Apr 7 23:54:15.135: INFO: Found 0 / 1 Apr 7 23:54:16.143: INFO: Selector matched 1 pods for map[app:agnhost] Apr 7 23:54:16.143: INFO: Found 1 / 1 Apr 7 23:54:16.143: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 7 23:54:16.145: INFO: Selector matched 1 pods for map[app:agnhost] Apr 7 23:54:16.145: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 7 23:54:16.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-2r66j --namespace=kubectl-4439 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 7 23:54:16.258: INFO: stderr: "" Apr 7 23:54:16.258: INFO: stdout: "pod/agnhost-master-2r66j patched\n" STEP: checking annotations Apr 7 23:54:16.270: INFO: Selector matched 1 pods for map[app:agnhost] Apr 7 23:54:16.270: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:54:16.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4439" for this suite. 
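The `kubectl patch pod … -p {"metadata":{"annotations":{"x":"y"}}}` invocation above sends a strategic-merge patch body. A simplified sketch of building that body and merging the annotation portion into a pod object (real strategic merge has per-field semantics; only `metadata.annotations` is modeled here, which is all this test patches):

```python
import json

# The patch body from the logged kubectl command.
patch = {"metadata": {"annotations": {"x": "y"}}}

def merge_annotations(pod, patch):
    """Apply the metadata.annotations part of a merge patch to a pod dict,
    preserving any annotations the pod already carries."""
    annotations = dict(pod.get("metadata", {}).get("annotations", {}))
    annotations.update(patch.get("metadata", {}).get("annotations", {}))
    pod.setdefault("metadata", {})["annotations"] = annotations
    return pod

pod = {"metadata": {"name": "agnhost-master-2r66j"}}
pod = merge_annotations(pod, patch)
payload = json.dumps(patch, separators=(",", ":"))  # wire form of -p argument
```

The "checking annotations" step afterwards simply re-reads each pod and verifies `x: y` is present.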
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":92,"skipped":1804,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:54:16.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:54:16.361: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-eae64abb-d1f7-473d-842a-c67f86732acf" in namespace "security-context-test-6358" to be "Succeeded or Failed" Apr 7 23:54:16.366: INFO: Pod "alpine-nnp-false-eae64abb-d1f7-473d-842a-c67f86732acf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.048048ms Apr 7 23:54:18.369: INFO: Pod "alpine-nnp-false-eae64abb-d1f7-473d-842a-c67f86732acf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007557496s Apr 7 23:54:20.372: INFO: Pod "alpine-nnp-false-eae64abb-d1f7-473d-842a-c67f86732acf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011467498s Apr 7 23:54:20.372: INFO: Pod "alpine-nnp-false-eae64abb-d1f7-473d-842a-c67f86732acf" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:54:20.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6358" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1819,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:54:20.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-4774 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 7 23:54:20.439: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 7 23:54:20.518: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 7 23:54:22.521: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 7 
23:54:24.522: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:54:26.521: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:54:28.522: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:54:30.522: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:54:32.522: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:54:34.522: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:54:36.522: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:54:38.522: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:54:40.522: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 7 23:54:40.528: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 7 23:54:44.552: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.152:8080/dial?request=hostname&protocol=http&host=10.244.2.126&port=8080&tries=1'] Namespace:pod-network-test-4774 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 23:54:44.552: INFO: >>> kubeConfig: /root/.kube/config I0407 23:54:44.592695 7 log.go:172] (0xc0026a8d10) (0xc000c73a40) Create stream I0407 23:54:44.592728 7 log.go:172] (0xc0026a8d10) (0xc000c73a40) Stream added, broadcasting: 1 I0407 23:54:44.594736 7 log.go:172] (0xc0026a8d10) Reply frame received for 1 I0407 23:54:44.594782 7 log.go:172] (0xc0026a8d10) (0xc000431e00) Create stream I0407 23:54:44.594791 7 log.go:172] (0xc0026a8d10) (0xc000431e00) Stream added, broadcasting: 3 I0407 23:54:44.595791 7 log.go:172] (0xc0026a8d10) Reply frame received for 3 I0407 23:54:44.595833 7 log.go:172] (0xc0026a8d10) (0xc000f1a000) Create stream I0407 23:54:44.595849 7 log.go:172] (0xc0026a8d10) (0xc000f1a000) Stream added, broadcasting: 5 I0407 23:54:44.596847 7 log.go:172] (0xc0026a8d10) Reply frame received for 
5 I0407 23:54:44.681477 7 log.go:172] (0xc0026a8d10) Data frame received for 3 I0407 23:54:44.681509 7 log.go:172] (0xc000431e00) (3) Data frame handling I0407 23:54:44.681531 7 log.go:172] (0xc000431e00) (3) Data frame sent I0407 23:54:44.682277 7 log.go:172] (0xc0026a8d10) Data frame received for 5 I0407 23:54:44.682320 7 log.go:172] (0xc000f1a000) (5) Data frame handling I0407 23:54:44.682347 7 log.go:172] (0xc0026a8d10) Data frame received for 3 I0407 23:54:44.682365 7 log.go:172] (0xc000431e00) (3) Data frame handling I0407 23:54:44.684267 7 log.go:172] (0xc0026a8d10) Data frame received for 1 I0407 23:54:44.684293 7 log.go:172] (0xc000c73a40) (1) Data frame handling I0407 23:54:44.684300 7 log.go:172] (0xc000c73a40) (1) Data frame sent I0407 23:54:44.684316 7 log.go:172] (0xc0026a8d10) (0xc000c73a40) Stream removed, broadcasting: 1 I0407 23:54:44.684366 7 log.go:172] (0xc0026a8d10) Go away received I0407 23:54:44.684557 7 log.go:172] (0xc0026a8d10) (0xc000c73a40) Stream removed, broadcasting: 1 I0407 23:54:44.684568 7 log.go:172] (0xc0026a8d10) (0xc000431e00) Stream removed, broadcasting: 3 I0407 23:54:44.684574 7 log.go:172] (0xc0026a8d10) (0xc000f1a000) Stream removed, broadcasting: 5 Apr 7 23:54:44.684: INFO: Waiting for responses: map[] Apr 7 23:54:44.687: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.152:8080/dial?request=hostname&protocol=http&host=10.244.1.151&port=8080&tries=1'] Namespace:pod-network-test-4774 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 23:54:44.687: INFO: >>> kubeConfig: /root/.kube/config I0407 23:54:44.721751 7 log.go:172] (0xc002a64000) (0xc001f42fa0) Create stream I0407 23:54:44.721779 7 log.go:172] (0xc002a64000) (0xc001f42fa0) Stream added, broadcasting: 1 I0407 23:54:44.724040 7 log.go:172] (0xc002a64000) Reply frame received for 1 I0407 23:54:44.724088 7 log.go:172] (0xc002a64000) (0xc000f1a320) Create stream I0407 
23:54:44.724107 7 log.go:172] (0xc002a64000) (0xc000f1a320) Stream added, broadcasting: 3 I0407 23:54:44.725520 7 log.go:172] (0xc002a64000) Reply frame received for 3 I0407 23:54:44.725573 7 log.go:172] (0xc002a64000) (0xc001f43040) Create stream I0407 23:54:44.725590 7 log.go:172] (0xc002a64000) (0xc001f43040) Stream added, broadcasting: 5 I0407 23:54:44.726705 7 log.go:172] (0xc002a64000) Reply frame received for 5 I0407 23:54:44.802948 7 log.go:172] (0xc002a64000) Data frame received for 3 I0407 23:54:44.803053 7 log.go:172] (0xc000f1a320) (3) Data frame handling I0407 23:54:44.803085 7 log.go:172] (0xc000f1a320) (3) Data frame sent I0407 23:54:44.803689 7 log.go:172] (0xc002a64000) Data frame received for 5 I0407 23:54:44.803725 7 log.go:172] (0xc001f43040) (5) Data frame handling I0407 23:54:44.804150 7 log.go:172] (0xc002a64000) Data frame received for 3 I0407 23:54:44.804175 7 log.go:172] (0xc000f1a320) (3) Data frame handling I0407 23:54:44.805543 7 log.go:172] (0xc002a64000) Data frame received for 1 I0407 23:54:44.805606 7 log.go:172] (0xc001f42fa0) (1) Data frame handling I0407 23:54:44.805633 7 log.go:172] (0xc001f42fa0) (1) Data frame sent I0407 23:54:44.805667 7 log.go:172] (0xc002a64000) (0xc001f42fa0) Stream removed, broadcasting: 1 I0407 23:54:44.805742 7 log.go:172] (0xc002a64000) Go away received I0407 23:54:44.805906 7 log.go:172] (0xc002a64000) (0xc001f42fa0) Stream removed, broadcasting: 1 I0407 23:54:44.805931 7 log.go:172] (0xc002a64000) (0xc000f1a320) Stream removed, broadcasting: 3 I0407 23:54:44.805942 7 log.go:172] (0xc002a64000) (0xc001f43040) Stream removed, broadcasting: 5 Apr 7 23:54:44.806: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:54:44.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4774" for this suite. 
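The intra-pod connectivity check above curls an agnhost `/dial` endpoint on the test-container pod, asking it to reach each netserver pod and report the hostname it sees. A sketch of building that URL with the query parameters taken straight from the logged command (the helper itself is hypothetical):

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def dial_url(proxy_ip, target_ip, protocol="http", port=8080, tries=1):
    """Build the /dial URL: the pod at proxy_ip is asked to contact
    target_ip:port over the given protocol and return the hostname."""
    query = urlencode({"request": "hostname", "protocol": protocol,
                       "host": target_ip, "port": port, "tries": tries})
    return f"http://{proxy_ip}:8080/dial?{query}"

# Reconstructs the first logged probe: test pod 10.244.1.152 dials 10.244.2.126.
url = dial_url("10.244.1.152", "10.244.2.126")
```

The `Waiting for responses: map[]` lines mark each probe succeeding: the map of pods still awaiting a response is empty.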
• [SLOW TEST:24.425 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1832,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:54:44.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:54:44.860: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 7 23:54:47.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9804 create -f -' Apr 7 23:54:51.826: INFO: stderr: "" Apr 7 23:54:51.826: INFO: stdout: 
"e2e-test-crd-publish-openapi-1191-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 7 23:54:51.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9804 delete e2e-test-crd-publish-openapi-1191-crds test-cr' Apr 7 23:54:51.927: INFO: stderr: "" Apr 7 23:54:51.927: INFO: stdout: "e2e-test-crd-publish-openapi-1191-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 7 23:54:51.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9804 apply -f -' Apr 7 23:54:52.174: INFO: stderr: "" Apr 7 23:54:52.174: INFO: stdout: "e2e-test-crd-publish-openapi-1191-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 7 23:54:52.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9804 delete e2e-test-crd-publish-openapi-1191-crds test-cr' Apr 7 23:54:52.275: INFO: stderr: "" Apr 7 23:54:52.275: INFO: stdout: "e2e-test-crd-publish-openapi-1191-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 7 23:54:52.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1191-crds' Apr 7 23:54:52.542: INFO: stderr: "" Apr 7 23:54:52.543: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1191-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:54:55.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9804" for this suite. • [SLOW TEST:10.673 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":95,"skipped":1834,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 
STEP: Creating a kubernetes client Apr 7 23:54:55.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-28988800-27be-476b-96ea-e26933ef946d in namespace container-probe-8507 Apr 7 23:54:59.607: INFO: Started pod test-webserver-28988800-27be-476b-96ea-e26933ef946d in namespace container-probe-8507 STEP: checking the pod's current state and verifying that restartCount is present Apr 7 23:54:59.609: INFO: Initial restart count of pod test-webserver-28988800-27be-476b-96ea-e26933ef946d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:59:00.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8507" for this suite. 
• [SLOW TEST:244.758 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1836,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:59:00.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 7 23:59:00.323: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:59:05.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-860" for this 
suite. • [SLOW TEST:5.320 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":97,"skipped":1850,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:59:05.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-6100 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 7 23:59:05.690: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 7 23:59:05.720: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 7 23:59:07.725: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 7 23:59:09.724: INFO: The status of Pod netserver-0 is 
Running (Ready = false) Apr 7 23:59:11.724: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:59:13.725: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:59:15.724: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:59:17.724: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 7 23:59:19.725: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 7 23:59:19.731: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 7 23:59:21.734: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 7 23:59:25.766: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.128 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6100 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 23:59:25.766: INFO: >>> kubeConfig: /root/.kube/config I0407 23:59:25.790244 7 log.go:172] (0xc0026236b0) (0xc000573d60) Create stream I0407 23:59:25.790273 7 log.go:172] (0xc0026236b0) (0xc000573d60) Stream added, broadcasting: 1 I0407 23:59:25.792783 7 log.go:172] (0xc0026236b0) Reply frame received for 1 I0407 23:59:25.792834 7 log.go:172] (0xc0026236b0) (0xc000d7ec80) Create stream I0407 23:59:25.792850 7 log.go:172] (0xc0026236b0) (0xc000d7ec80) Stream added, broadcasting: 3 I0407 23:59:25.794232 7 log.go:172] (0xc0026236b0) Reply frame received for 3 I0407 23:59:25.794266 7 log.go:172] (0xc0026236b0) (0xc000573f40) Create stream I0407 23:59:25.794278 7 log.go:172] (0xc0026236b0) (0xc000573f40) Stream added, broadcasting: 5 I0407 23:59:25.795311 7 log.go:172] (0xc0026236b0) Reply frame received for 5 I0407 23:59:26.854715 7 log.go:172] (0xc0026236b0) Data frame received for 3 I0407 23:59:26.854782 7 log.go:172] (0xc000d7ec80) (3) Data frame handling I0407 23:59:26.854806 7 log.go:172] (0xc000d7ec80) (3) Data frame sent I0407 23:59:26.854837 7 log.go:172] 
(0xc0026236b0) Data frame received for 3 I0407 23:59:26.854858 7 log.go:172] (0xc000d7ec80) (3) Data frame handling I0407 23:59:26.855106 7 log.go:172] (0xc0026236b0) Data frame received for 5 I0407 23:59:26.855145 7 log.go:172] (0xc000573f40) (5) Data frame handling I0407 23:59:26.857333 7 log.go:172] (0xc0026236b0) Data frame received for 1 I0407 23:59:26.857368 7 log.go:172] (0xc000573d60) (1) Data frame handling I0407 23:59:26.857381 7 log.go:172] (0xc000573d60) (1) Data frame sent I0407 23:59:26.857406 7 log.go:172] (0xc0026236b0) (0xc000573d60) Stream removed, broadcasting: 1 I0407 23:59:26.857511 7 log.go:172] (0xc0026236b0) Go away received I0407 23:59:26.857584 7 log.go:172] (0xc0026236b0) (0xc000573d60) Stream removed, broadcasting: 1 I0407 23:59:26.857611 7 log.go:172] (0xc0026236b0) (0xc000d7ec80) Stream removed, broadcasting: 3 I0407 23:59:26.857625 7 log.go:172] (0xc0026236b0) (0xc000573f40) Stream removed, broadcasting: 5 Apr 7 23:59:26.857: INFO: Found all expected endpoints: [netserver-0] Apr 7 23:59:26.865: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.154 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6100 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 23:59:26.865: INFO: >>> kubeConfig: /root/.kube/config I0407 23:59:26.905296 7 log.go:172] (0xc002dcc000) (0xc000d4f180) Create stream I0407 23:59:26.905406 7 log.go:172] (0xc002dcc000) (0xc000d4f180) Stream added, broadcasting: 1 I0407 23:59:26.908117 7 log.go:172] (0xc002dcc000) Reply frame received for 1 I0407 23:59:26.908164 7 log.go:172] (0xc002dcc000) (0xc000cc4f00) Create stream I0407 23:59:26.908177 7 log.go:172] (0xc002dcc000) (0xc000cc4f00) Stream added, broadcasting: 3 I0407 23:59:26.909342 7 log.go:172] (0xc002dcc000) Reply frame received for 3 I0407 23:59:26.909390 7 log.go:172] (0xc002dcc000) (0xc000cc4fa0) Create stream I0407 23:59:26.909406 7 log.go:172] 
(0xc002dcc000) (0xc000cc4fa0) Stream added, broadcasting: 5 I0407 23:59:26.910243 7 log.go:172] (0xc002dcc000) Reply frame received for 5 I0407 23:59:28.002360 7 log.go:172] (0xc002dcc000) Data frame received for 5 I0407 23:59:28.002432 7 log.go:172] (0xc000cc4fa0) (5) Data frame handling I0407 23:59:28.002476 7 log.go:172] (0xc002dcc000) Data frame received for 3 I0407 23:59:28.002495 7 log.go:172] (0xc000cc4f00) (3) Data frame handling I0407 23:59:28.002511 7 log.go:172] (0xc000cc4f00) (3) Data frame sent I0407 23:59:28.002607 7 log.go:172] (0xc002dcc000) Data frame received for 3 I0407 23:59:28.002630 7 log.go:172] (0xc000cc4f00) (3) Data frame handling I0407 23:59:28.004438 7 log.go:172] (0xc002dcc000) Data frame received for 1 I0407 23:59:28.004473 7 log.go:172] (0xc000d4f180) (1) Data frame handling I0407 23:59:28.004585 7 log.go:172] (0xc000d4f180) (1) Data frame sent I0407 23:59:28.004666 7 log.go:172] (0xc002dcc000) (0xc000d4f180) Stream removed, broadcasting: 1 I0407 23:59:28.004710 7 log.go:172] (0xc002dcc000) Go away received I0407 23:59:28.004816 7 log.go:172] (0xc002dcc000) (0xc000d4f180) Stream removed, broadcasting: 1 I0407 23:59:28.004850 7 log.go:172] (0xc002dcc000) (0xc000cc4f00) Stream removed, broadcasting: 3 I0407 23:59:28.004873 7 log.go:172] (0xc002dcc000) (0xc000cc4fa0) Stream removed, broadcasting: 5 Apr 7 23:59:28.004: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:59:28.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6100" for this suite. 
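The connectivity check above shells out to `echo hostName | nc -w 1 -u <pod-ip> 8081`; the agnhost netserver answers a "hostName" datagram with its pod name, and the test succeeds once every expected endpoint has replied. A local sketch of that request/reply exchange with both ends on localhost, purely for illustration (ports and the echoed name are stand-ins for the real pod IP and hostname):

```python
import socket
import threading

# UDP echo server standing in for the agnhost netserver: it answers a
# "hostName" datagram with a fixed pod name.
def serve_once(sock, hostname):
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(hostname.encode(), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # ephemeral port, like 8081 on the pod
port = server.getsockname()[1]
t = threading.Thread(target=serve_once, args=(server, "netserver-0"))
t.start()

# Client side, mirroring `echo hostName | nc -w 1 -u <ip> <port>`.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)                   # nc's -w 1 timeout
client.sendto(b"hostName\n", ("127.0.0.1", port))
reply, _ = client.recvfrom(1024)
t.join()
server.close()
client.close()
print(reply.decode())                    # the endpoint's name
```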
• [SLOW TEST:22.448 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1851,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:59:28.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Apr 7 23:59:28.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9925' Apr 7 23:59:28.374: INFO: stderr: "" Apr 7 23:59:28.374: INFO: stdout: "pod/pause created\n" Apr 7 23:59:28.374: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 7 23:59:28.374: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9925" 
to be "running and ready" Apr 7 23:59:28.383: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.287152ms Apr 7 23:59:30.387: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012306021s Apr 7 23:59:32.391: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.016898086s Apr 7 23:59:32.391: INFO: Pod "pause" satisfied condition "running and ready" Apr 7 23:59:32.391: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Apr 7 23:59:32.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9925' Apr 7 23:59:32.498: INFO: stderr: "" Apr 7 23:59:32.498: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 7 23:59:32.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9925' Apr 7 23:59:32.588: INFO: stderr: "" Apr 7 23:59:32.588: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 7 23:59:32.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9925' Apr 7 23:59:32.680: INFO: stderr: "" Apr 7 23:59:32.680: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 7 23:59:32.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod 
pause -L testing-label --namespace=kubectl-9925' Apr 7 23:59:32.765: INFO: stderr: "" Apr 7 23:59:32.765: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Apr 7 23:59:32.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9925' Apr 7 23:59:32.883: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 7 23:59:32.883: INFO: stdout: "pod \"pause\" force deleted\n" Apr 7 23:59:32.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9925' Apr 7 23:59:32.988: INFO: stderr: "No resources found in kubectl-9925 namespace.\n" Apr 7 23:59:32.988: INFO: stdout: "" Apr 7 23:59:32.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9925 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 7 23:59:33.072: INFO: stderr: "" Apr 7 23:59:33.072: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:59:33.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9925" for this suite. 
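The cleanup step above lists surviving pods with a go-template that skips any pod carrying a deletionTimestamp. The same filter, sketched in Python over an API-style pod list (the sample items are made up):

```python
# Mirror of the go-template
#   {{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ end }}{{ end }}
# used by the kubectl cleanup above: keep only pods not marked for deletion.
pods = {
    "items": [
        {"metadata": {"name": "pause", "deletionTimestamp": "2020-04-07T23:59:32Z"}},
        {"metadata": {"name": "other-pod"}},  # hypothetical second pod
    ]
}

surviving = [
    p["metadata"]["name"]
    for p in pods["items"]
    if not p["metadata"].get("deletionTimestamp")
]
print(surviving)  # only pods not being deleted remain
```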
• [SLOW TEST:5.095 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":99,"skipped":1859,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:59:33.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:59:33.831: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 47.721033ms) Apr 7 23:59:33.953: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 122.084727ms) Apr 7 23:59:34.001: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 47.575888ms) Apr 7 23:59:34.024: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 22.908261ms) Apr 7 23:59:34.082: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 58.259228ms) Apr 7 23:59:34.090: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 7.454548ms) Apr 7 23:59:34.095: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 5.596231ms) Apr 7 23:59:34.102: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 6.137225ms) Apr 7 23:59:34.138: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 36.495734ms) Apr 7 23:59:34.144: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 5.975574ms) Apr 7 23:59:34.274: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 129.602899ms) Apr 7 23:59:34.282: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 8.302883ms) Apr 7 23:59:34.336: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 54.011802ms) Apr 7 23:59:34.341: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 4.766712ms) Apr 7 23:59:34.526: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 185.096153ms) Apr 7 23:59:34.623: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 96.83995ms) Apr 7 23:59:34.718: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 94.552856ms) Apr 7 23:59:34.725: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 6.794262ms) Apr 7 23:59:34.729: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 4.637405ms) Apr 7 23:59:34.734: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/: containers/ pods/ (200; 4.194667ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:59:34.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7907" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":100,"skipped":1861,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:59:34.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:59:35.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7522" for this suite. 
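The Lease test above exercises basic CRUD on the coordination.k8s.io/v1 Lease API. A sketch of the object shape and of what renewing a lease means (bump renewTime; leaseTransitions only increments when the holder changes); the holder identity and duration here are illustrative assumptions:

```python
import datetime

# Sketch of a coordination.k8s.io/v1 Lease object, as exercised by the
# lease API test above. Holder name and duration are hypothetical.
lease = {
    "apiVersion": "coordination.k8s.io/v1",
    "kind": "Lease",
    "metadata": {"name": "lease-test", "namespace": "lease-test-7522"},
    "spec": {
        "holderIdentity": "holder-1",
        "leaseDurationSeconds": 30,
        "renewTime": None,
        "leaseTransitions": 0,
    },
}

def renew(lease, holder, now):
    spec = lease["spec"]
    if spec["holderIdentity"] != holder:
        spec["leaseTransitions"] += 1   # ownership changed hands
        spec["holderIdentity"] = holder
    spec["renewTime"] = now.isoformat() + "Z"

# Same holder renews: renewTime advances, transitions stay at 0.
renew(lease, "holder-1", datetime.datetime(2020, 4, 7, 23, 59, 35))
print(lease["spec"]["renewTime"], lease["spec"]["leaseTransitions"])
```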
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":101,"skipped":1899,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:59:35.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 7 23:59:36.267: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 7 23:59:38.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900776, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900776, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900776, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721900776, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 7 23:59:41.309: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 7 23:59:41.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 7 23:59:42.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2904" for this suite. 
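The webhook deployed above receives a ConversionReview listing custom resources at mixed versions ("a non homogeneous list of CRs") and must return every object converted to request.desiredAPIVersion with a Success result. A sketch of that handler's core logic; the group name, uid, and field mapping are made-up examples, since the real e2e webhook converts between its own v1/v2 test CRDs:

```python
# Sketch of a CRD conversion webhook handler: convert every object in the
# request to the desired API version. Real webhooks also translate fields
# between schema versions; here only apiVersion is rewritten.
def convert(review):
    req = review["request"]
    desired = req["desiredAPIVersion"]
    converted = []
    for obj in req["objects"]:
        obj = dict(obj)
        obj["apiVersion"] = desired
        converted.append(obj)
    return {
        "apiVersion": "apiextensions.k8s.io/v1",
        "kind": "ConversionReview",
        "response": {
            "uid": req["uid"],
            "convertedObjects": converted,
            "result": {"status": "Success"},
        },
    }

# A non homogeneous list: one v1 object and one v2 object in one request.
review = {
    "request": {
        "uid": "705ab4f5",  # hypothetical uid
        "desiredAPIVersion": "stable.example.com/v2",
        "objects": [
            {"apiVersion": "stable.example.com/v1", "kind": "TestCRD"},
            {"apiVersion": "stable.example.com/v2", "kind": "TestCRD"},
        ],
    }
}
resp = convert(review)["response"]
print([o["apiVersion"] for o in resp["convertedObjects"]])
```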
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.303 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":102,"skipped":1915,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 7 23:59:42.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 7 23:59:50.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 23:59:50.793: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 23:59:52.794: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 23:59:52.798: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 23:59:54.794: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 23:59:54.797: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 23:59:56.794: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 23:59:56.798: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 23:59:58.794: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 23:59:58.798: INFO: Pod pod-with-prestop-exec-hook still exists Apr 8 00:00:00.794: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 8 00:00:00.798: INFO: Pod pod-with-prestop-exec-hook still exists Apr 8 00:00:02.794: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 8 00:00:02.798: INFO: Pod pod-with-prestop-exec-hook still exists Apr 8 00:00:04.794: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 8 00:00:04.799: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:00:04.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5343" for this suite. 
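The pod deleted above carries a preStop exec hook, which the kubelet runs before stopping the container; the test then checks the handler pod received the hook's request. A sketch of where that hook lives in the pod spec; the image tag and hook command are illustrative assumptions (the real e2e pod calls back to the HTTPGet handler pod created earlier):

```python
# Sketch of a pod with a preStop exec lifecycle hook, as deleted by the
# test above. Image tag and hook command are hypothetical.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-exec-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-prestop-exec-hook",
            "image": "agnhost:latest",  # hypothetical tag
            "lifecycle": {
                "preStop": {
                    # Runs inside the container before it is stopped.
                    "exec": {"command": ["sh", "-c",
                             "curl http://handler:8080/echo?msg=prestop"]},
                },
            },
        }],
    },
}

hook = pod["spec"]["containers"][0]["lifecycle"]["preStop"]["exec"]
print(hook["command"])
```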
• [SLOW TEST:22.161 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1924,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:00:04.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:00:04.884: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/ pods/ (200; 5.349473ms) Apr 8 00:00:04.888: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.833183ms) Apr 8 00:00:04.891: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.488933ms) Apr 8 00:00:04.908: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 17.170335ms) Apr 8 00:00:04.913: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 4.102445ms) Apr 8 00:00:04.916: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.154465ms) Apr 8 00:00:04.919: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.189943ms) Apr 8 00:00:04.923: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.490188ms) Apr 8 00:00:04.926: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.047165ms) Apr 8 00:00:04.928: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.652802ms) Apr 8 00:00:04.932: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.066767ms) Apr 8 00:00:04.935: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.167145ms) Apr 8 00:00:04.938: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.955962ms) Apr 8 00:00:04.941: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.203702ms) Apr 8 00:00:04.944: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.00959ms) Apr 8 00:00:04.948: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.476469ms) Apr 8 00:00:04.951: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.44679ms) Apr 8 00:00:04.955: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.760642ms) Apr 8 00:00:04.958: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.371393ms) Apr 8 00:00:04.962: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.609459ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:00:04.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9185" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":104,"skipped":1943,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:00:04.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-0aaf1915-52b6-4827-a5c3-2815c74eb674 STEP: Creating a pod to test consume configMaps Apr 8 00:00:05.057: INFO: Waiting up to 5m0s for pod "pod-configmaps-c5288cc0-19dd-49a7-85af-5562182e6ccc" in namespace "configmap-1004" to be "Succeeded or Failed" Apr 8 00:00:05.063: INFO: Pod "pod-configmaps-c5288cc0-19dd-49a7-85af-5562182e6ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.337344ms Apr 8 00:00:07.067: INFO: Pod "pod-configmaps-c5288cc0-19dd-49a7-85af-5562182e6ccc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010162122s Apr 8 00:00:09.071: INFO: Pod "pod-configmaps-c5288cc0-19dd-49a7-85af-5562182e6ccc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014199455s STEP: Saw pod success Apr 8 00:00:09.071: INFO: Pod "pod-configmaps-c5288cc0-19dd-49a7-85af-5562182e6ccc" satisfied condition "Succeeded or Failed" Apr 8 00:00:09.074: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c5288cc0-19dd-49a7-85af-5562182e6ccc container configmap-volume-test: STEP: delete the pod Apr 8 00:00:09.130: INFO: Waiting for pod pod-configmaps-c5288cc0-19dd-49a7-85af-5562182e6ccc to disappear Apr 8 00:00:09.134: INFO: Pod pod-configmaps-c5288cc0-19dd-49a7-85af-5562182e6ccc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:00:09.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1004" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:00:09.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Apr 8 00:00:09.206: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Apr 8 00:00:09.242: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 8 00:00:09.243: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Apr 8 00:00:09.258: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 8 00:00:09.258: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Apr 8 00:00:09.284: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Apr 8 00:00:09.284: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Apr 8 00:00:16.453: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:00:16.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-4983" for this suite. • [SLOW TEST:7.384 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":275,"completed":106,"skipped":1991,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:00:16.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Apr 8 00:00:16.617: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3166" to be "Succeeded or Failed" Apr 8 00:00:16.656: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 38.969369ms Apr 8 00:00:18.660: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042498576s Apr 8 00:00:20.664: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046446334s Apr 8 00:00:22.668: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.050260802s STEP: Saw pod success Apr 8 00:00:22.668: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Apr 8 00:00:22.671: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 8 00:00:22.807: INFO: Waiting for pod pod-host-path-test to disappear Apr 8 00:00:22.992: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:00:22.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3166" for this suite. • [SLOW TEST:6.593 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":2016,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:00:23.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-d74dee5c-bbb4-41ad-847d-3882f9a00ee5 STEP: Creating a pod to test consume secrets Apr 8 00:00:23.534: INFO: Waiting up to 5m0s for pod "pod-secrets-3b83b932-a3cc-4bee-a5bc-f772f319e3b3" in namespace "secrets-1832" to be "Succeeded or Failed" Apr 8 00:00:23.543: INFO: Pod "pod-secrets-3b83b932-a3cc-4bee-a5bc-f772f319e3b3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.975263ms Apr 8 00:00:25.609: INFO: Pod "pod-secrets-3b83b932-a3cc-4bee-a5bc-f772f319e3b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075433224s Apr 8 00:00:27.613: INFO: Pod "pod-secrets-3b83b932-a3cc-4bee-a5bc-f772f319e3b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079782529s STEP: Saw pod success Apr 8 00:00:27.613: INFO: Pod "pod-secrets-3b83b932-a3cc-4bee-a5bc-f772f319e3b3" satisfied condition "Succeeded or Failed" Apr 8 00:00:27.616: INFO: Trying to get logs from node latest-worker pod pod-secrets-3b83b932-a3cc-4bee-a5bc-f772f319e3b3 container secret-volume-test: STEP: delete the pod Apr 8 00:00:27.744: INFO: Waiting for pod pod-secrets-3b83b932-a3cc-4bee-a5bc-f772f319e3b3 to disappear Apr 8 00:00:27.761: INFO: Pod pod-secrets-3b83b932-a3cc-4bee-a5bc-f772f319e3b3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:00:27.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1832" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":2021,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:00:27.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 8 00:00:27.893: INFO: Waiting up to 5m0s for pod "downward-api-eda782c7-1e68-44f0-847d-427e79a6c070" in namespace "downward-api-2088" to be "Succeeded or Failed" Apr 8 00:00:27.905: INFO: Pod "downward-api-eda782c7-1e68-44f0-847d-427e79a6c070": Phase="Pending", Reason="", readiness=false. Elapsed: 11.98031ms Apr 8 00:00:29.908: INFO: Pod "downward-api-eda782c7-1e68-44f0-847d-427e79a6c070": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015367426s Apr 8 00:00:31.912: INFO: Pod "downward-api-eda782c7-1e68-44f0-847d-427e79a6c070": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019297833s STEP: Saw pod success Apr 8 00:00:31.912: INFO: Pod "downward-api-eda782c7-1e68-44f0-847d-427e79a6c070" satisfied condition "Succeeded or Failed" Apr 8 00:00:31.915: INFO: Trying to get logs from node latest-worker pod downward-api-eda782c7-1e68-44f0-847d-427e79a6c070 container dapi-container: STEP: delete the pod Apr 8 00:00:31.936: INFO: Waiting for pod downward-api-eda782c7-1e68-44f0-847d-427e79a6c070 to disappear Apr 8 00:00:31.940: INFO: Pod downward-api-eda782c7-1e68-44f0-847d-427e79a6c070 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:00:31.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2088" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":2051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:00:31.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5997 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 8 00:00:32.037: INFO: Found 0 stateful pods, waiting for 3 Apr 8 00:00:42.042: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 00:00:42.042: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 00:00:42.042: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 8 00:00:42.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5997 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 00:00:42.292: INFO: stderr: "I0408 00:00:42.175131 986 log.go:172] (0xc000b0f600) (0xc000a5c960) Create stream\nI0408 00:00:42.175174 986 log.go:172] (0xc000b0f600) (0xc000a5c960) Stream added, broadcasting: 1\nI0408 00:00:42.179900 986 log.go:172] (0xc000b0f600) Reply frame received for 1\nI0408 00:00:42.179974 986 log.go:172] (0xc000b0f600) (0xc00047b540) Create stream\nI0408 00:00:42.179989 986 log.go:172] (0xc000b0f600) (0xc00047b540) Stream added, broadcasting: 3\nI0408 00:00:42.180894 986 log.go:172] (0xc000b0f600) Reply frame received for 3\nI0408 00:00:42.180934 986 log.go:172] (0xc000b0f600) (0xc00059ea00) Create stream\nI0408 00:00:42.180949 986 log.go:172] (0xc000b0f600) (0xc00059ea00) Stream added, broadcasting: 5\nI0408 00:00:42.182035 986 log.go:172] (0xc000b0f600) Reply frame received for 5\nI0408 00:00:42.261956 986 log.go:172] (0xc000b0f600) Data frame received for 5\nI0408 00:00:42.261984 986 log.go:172] (0xc00059ea00) (5) Data frame handling\nI0408 
00:00:42.262006 986 log.go:172] (0xc00059ea00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 00:00:42.284917 986 log.go:172] (0xc000b0f600) Data frame received for 3\nI0408 00:00:42.284954 986 log.go:172] (0xc00047b540) (3) Data frame handling\nI0408 00:00:42.284978 986 log.go:172] (0xc00047b540) (3) Data frame sent\nI0408 00:00:42.284992 986 log.go:172] (0xc000b0f600) Data frame received for 3\nI0408 00:00:42.285003 986 log.go:172] (0xc00047b540) (3) Data frame handling\nI0408 00:00:42.285082 986 log.go:172] (0xc000b0f600) Data frame received for 5\nI0408 00:00:42.285218 986 log.go:172] (0xc00059ea00) (5) Data frame handling\nI0408 00:00:42.287241 986 log.go:172] (0xc000b0f600) Data frame received for 1\nI0408 00:00:42.287268 986 log.go:172] (0xc000a5c960) (1) Data frame handling\nI0408 00:00:42.287281 986 log.go:172] (0xc000a5c960) (1) Data frame sent\nI0408 00:00:42.287299 986 log.go:172] (0xc000b0f600) (0xc000a5c960) Stream removed, broadcasting: 1\nI0408 00:00:42.287416 986 log.go:172] (0xc000b0f600) Go away received\nI0408 00:00:42.287671 986 log.go:172] (0xc000b0f600) (0xc000a5c960) Stream removed, broadcasting: 1\nI0408 00:00:42.287690 986 log.go:172] (0xc000b0f600) (0xc00047b540) Stream removed, broadcasting: 3\nI0408 00:00:42.287700 986 log.go:172] (0xc000b0f600) (0xc00059ea00) Stream removed, broadcasting: 5\n" Apr 8 00:00:42.292: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 00:00:42.292: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 8 00:00:52.325: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 8 00:01:02.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-5997 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 00:01:02.592: INFO: stderr: "I0408 00:01:02.489243 1008 log.go:172] (0xc0009eaa50) (0xc000648280) Create stream\nI0408 00:01:02.489304 1008 log.go:172] (0xc0009eaa50) (0xc000648280) Stream added, broadcasting: 1\nI0408 00:01:02.491848 1008 log.go:172] (0xc0009eaa50) Reply frame received for 1\nI0408 00:01:02.491916 1008 log.go:172] (0xc0009eaa50) (0xc000648320) Create stream\nI0408 00:01:02.491933 1008 log.go:172] (0xc0009eaa50) (0xc000648320) Stream added, broadcasting: 3\nI0408 00:01:02.493045 1008 log.go:172] (0xc0009eaa50) Reply frame received for 3\nI0408 00:01:02.493088 1008 log.go:172] (0xc0009eaa50) (0xc0009a4000) Create stream\nI0408 00:01:02.493102 1008 log.go:172] (0xc0009eaa50) (0xc0009a4000) Stream added, broadcasting: 5\nI0408 00:01:02.494157 1008 log.go:172] (0xc0009eaa50) Reply frame received for 5\nI0408 00:01:02.586577 1008 log.go:172] (0xc0009eaa50) Data frame received for 5\nI0408 00:01:02.586621 1008 log.go:172] (0xc0009a4000) (5) Data frame handling\nI0408 00:01:02.586636 1008 log.go:172] (0xc0009a4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 00:01:02.586673 1008 log.go:172] (0xc0009eaa50) Data frame received for 3\nI0408 00:01:02.586698 1008 log.go:172] (0xc000648320) (3) Data frame handling\nI0408 00:01:02.586728 1008 log.go:172] (0xc000648320) (3) Data frame sent\nI0408 00:01:02.587091 1008 log.go:172] (0xc0009eaa50) Data frame received for 5\nI0408 00:01:02.587105 1008 log.go:172] (0xc0009a4000) (5) Data frame handling\nI0408 00:01:02.587128 1008 log.go:172] (0xc0009eaa50) Data frame received for 3\nI0408 00:01:02.587161 1008 log.go:172] (0xc000648320) (3) Data frame handling\nI0408 00:01:02.588915 1008 log.go:172] (0xc0009eaa50) Data frame received for 1\nI0408 00:01:02.588952 1008 log.go:172] (0xc000648280) (1) Data frame handling\nI0408 
00:01:02.588972 1008 log.go:172] (0xc000648280) (1) Data frame sent\nI0408 00:01:02.588993 1008 log.go:172] (0xc0009eaa50) (0xc000648280) Stream removed, broadcasting: 1\nI0408 00:01:02.589017 1008 log.go:172] (0xc0009eaa50) Go away received\nI0408 00:01:02.589410 1008 log.go:172] (0xc0009eaa50) (0xc000648280) Stream removed, broadcasting: 1\nI0408 00:01:02.589426 1008 log.go:172] (0xc0009eaa50) (0xc000648320) Stream removed, broadcasting: 3\nI0408 00:01:02.589432 1008 log.go:172] (0xc0009eaa50) (0xc0009a4000) Stream removed, broadcasting: 5\n" Apr 8 00:01:02.592: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 00:01:02.592: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 00:01:12.617: INFO: Waiting for StatefulSet statefulset-5997/ss2 to complete update Apr 8 00:01:12.617: INFO: Waiting for Pod statefulset-5997/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 8 00:01:12.617: INFO: Waiting for Pod statefulset-5997/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 8 00:01:12.617: INFO: Waiting for Pod statefulset-5997/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 8 00:01:22.624: INFO: Waiting for StatefulSet statefulset-5997/ss2 to complete update Apr 8 00:01:22.625: INFO: Waiting for Pod statefulset-5997/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 8 00:01:22.625: INFO: Waiting for Pod statefulset-5997/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 8 00:01:32.623: INFO: Waiting for StatefulSet statefulset-5997/ss2 to complete update Apr 8 00:01:32.623: INFO: Waiting for Pod statefulset-5997/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 8 00:01:42.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-5997 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 00:01:42.879: INFO: stderr: "I0408 00:01:42.756777 1031 log.go:172] (0xc00003a6e0) (0xc0006df5e0) Create stream\nI0408 00:01:42.756851 1031 log.go:172] (0xc00003a6e0) (0xc0006df5e0) Stream added, broadcasting: 1\nI0408 00:01:42.759632 1031 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0408 00:01:42.759688 1031 log.go:172] (0xc00003a6e0) (0xc0006df680) Create stream\nI0408 00:01:42.759720 1031 log.go:172] (0xc00003a6e0) (0xc0006df680) Stream added, broadcasting: 3\nI0408 00:01:42.760567 1031 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0408 00:01:42.760604 1031 log.go:172] (0xc00003a6e0) (0xc0006df720) Create stream\nI0408 00:01:42.760620 1031 log.go:172] (0xc00003a6e0) (0xc0006df720) Stream added, broadcasting: 5\nI0408 00:01:42.761499 1031 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0408 00:01:42.849328 1031 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0408 00:01:42.849359 1031 log.go:172] (0xc0006df720) (5) Data frame handling\nI0408 00:01:42.849382 1031 log.go:172] (0xc0006df720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 00:01:42.872304 1031 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0408 00:01:42.872337 1031 log.go:172] (0xc0006df680) (3) Data frame handling\nI0408 00:01:42.872375 1031 log.go:172] (0xc0006df680) (3) Data frame sent\nI0408 00:01:42.872613 1031 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0408 00:01:42.872646 1031 log.go:172] (0xc0006df680) (3) Data frame handling\nI0408 00:01:42.872773 1031 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0408 00:01:42.872789 1031 log.go:172] (0xc0006df720) (5) Data frame handling\nI0408 00:01:42.874784 1031 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0408 00:01:42.874822 1031 log.go:172] (0xc0006df5e0) (1) Data frame handling\nI0408 
00:01:42.874840 1031 log.go:172] (0xc0006df5e0) (1) Data frame sent\nI0408 00:01:42.874857 1031 log.go:172] (0xc00003a6e0) (0xc0006df5e0) Stream removed, broadcasting: 1\nI0408 00:01:42.874928 1031 log.go:172] (0xc00003a6e0) Go away received\nI0408 00:01:42.875270 1031 log.go:172] (0xc00003a6e0) (0xc0006df5e0) Stream removed, broadcasting: 1\nI0408 00:01:42.875291 1031 log.go:172] (0xc00003a6e0) (0xc0006df680) Stream removed, broadcasting: 3\nI0408 00:01:42.875313 1031 log.go:172] (0xc00003a6e0) (0xc0006df720) Stream removed, broadcasting: 5\n" Apr 8 00:01:42.880: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 00:01:42.880: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 00:01:52.915: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 8 00:02:02.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5997 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 00:02:03.224: INFO: stderr: "I0408 00:02:03.107312 1051 log.go:172] (0xc000a34630) (0xc0008580a0) Create stream\nI0408 00:02:03.107384 1051 log.go:172] (0xc000a34630) (0xc0008580a0) Stream added, broadcasting: 1\nI0408 00:02:03.110340 1051 log.go:172] (0xc000a34630) Reply frame received for 1\nI0408 00:02:03.110395 1051 log.go:172] (0xc000a34630) (0xc0007e3180) Create stream\nI0408 00:02:03.110412 1051 log.go:172] (0xc000a34630) (0xc0007e3180) Stream added, broadcasting: 3\nI0408 00:02:03.111382 1051 log.go:172] (0xc000a34630) Reply frame received for 3\nI0408 00:02:03.111415 1051 log.go:172] (0xc000a34630) (0xc000858140) Create stream\nI0408 00:02:03.111428 1051 log.go:172] (0xc000a34630) (0xc000858140) Stream added, broadcasting: 5\nI0408 00:02:03.112551 1051 log.go:172] (0xc000a34630) Reply frame received for 5\nI0408 
00:02:03.217308 1051 log.go:172] (0xc000a34630) Data frame received for 5\nI0408 00:02:03.217375 1051 log.go:172] (0xc000858140) (5) Data frame handling\nI0408 00:02:03.217398 1051 log.go:172] (0xc000858140) (5) Data frame sent\nI0408 00:02:03.217419 1051 log.go:172] (0xc000a34630) Data frame received for 5\nI0408 00:02:03.217437 1051 log.go:172] (0xc000858140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 00:02:03.217467 1051 log.go:172] (0xc000a34630) Data frame received for 3\nI0408 00:02:03.217499 1051 log.go:172] (0xc0007e3180) (3) Data frame handling\nI0408 00:02:03.217525 1051 log.go:172] (0xc0007e3180) (3) Data frame sent\nI0408 00:02:03.217539 1051 log.go:172] (0xc000a34630) Data frame received for 3\nI0408 00:02:03.217554 1051 log.go:172] (0xc0007e3180) (3) Data frame handling\nI0408 00:02:03.219000 1051 log.go:172] (0xc000a34630) Data frame received for 1\nI0408 00:02:03.219038 1051 log.go:172] (0xc0008580a0) (1) Data frame handling\nI0408 00:02:03.219066 1051 log.go:172] (0xc0008580a0) (1) Data frame sent\nI0408 00:02:03.219090 1051 log.go:172] (0xc000a34630) (0xc0008580a0) Stream removed, broadcasting: 1\nI0408 00:02:03.219120 1051 log.go:172] (0xc000a34630) Go away received\nI0408 00:02:03.219546 1051 log.go:172] (0xc000a34630) (0xc0008580a0) Stream removed, broadcasting: 1\nI0408 00:02:03.219570 1051 log.go:172] (0xc000a34630) (0xc0007e3180) Stream removed, broadcasting: 3\nI0408 00:02:03.219583 1051 log.go:172] (0xc000a34630) (0xc000858140) Stream removed, broadcasting: 5\n" Apr 8 00:02:03.225: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 00:02:03.225: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 00:02:13.246: INFO: Waiting for StatefulSet statefulset-5997/ss2 to complete update Apr 8 00:02:13.246: INFO: Waiting for Pod statefulset-5997/ss2-0 to have revision 
ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 8 00:02:13.246: INFO: Waiting for Pod statefulset-5997/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 8 00:02:23.253: INFO: Waiting for StatefulSet statefulset-5997/ss2 to complete update Apr 8 00:02:23.253: INFO: Waiting for Pod statefulset-5997/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 8 00:02:33.255: INFO: Waiting for StatefulSet statefulset-5997/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 8 00:02:43.253: INFO: Deleting all statefulset in ns statefulset-5997 Apr 8 00:02:43.256: INFO: Scaling statefulset ss2 to 0 Apr 8 00:03:13.308: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 00:03:13.311: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:03:13.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5997" for this suite. 
• [SLOW TEST:161.386 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":110,"skipped":2114,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:03:13.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-7850/configmap-test-de5f5f39-6f7c-4929-9f6a-82c9296e9be6 STEP: Creating a pod to test consume configMaps Apr 8 00:03:13.428: INFO: Waiting up to 5m0s for pod "pod-configmaps-72c2b046-6921-4d46-b801-3541c560d361" in namespace "configmap-7850" to be "Succeeded or Failed" Apr 8 00:03:13.439: INFO: Pod "pod-configmaps-72c2b046-6921-4d46-b801-3541c560d361": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.665495ms Apr 8 00:03:15.449: INFO: Pod "pod-configmaps-72c2b046-6921-4d46-b801-3541c560d361": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021096227s Apr 8 00:03:17.515: INFO: Pod "pod-configmaps-72c2b046-6921-4d46-b801-3541c560d361": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086732769s STEP: Saw pod success Apr 8 00:03:17.515: INFO: Pod "pod-configmaps-72c2b046-6921-4d46-b801-3541c560d361" satisfied condition "Succeeded or Failed" Apr 8 00:03:17.518: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-72c2b046-6921-4d46-b801-3541c560d361 container env-test: STEP: delete the pod Apr 8 00:03:17.543: INFO: Waiting for pod pod-configmaps-72c2b046-6921-4d46-b801-3541c560d361 to disappear Apr 8 00:03:17.549: INFO: Pod pod-configmaps-72c2b046-6921-4d46-b801-3541c560d361 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:03:17.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7850" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":2120,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:03:17.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 8 00:03:17.609: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:03:31.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-971" for this suite. 
• [SLOW TEST:14.310 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":112,"skipped":2157,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:03:31.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Apr 8 00:03:36.457: INFO: Successfully updated pod "adopt-release-5ktgt"
STEP: Checking that the Job readopts the Pod
Apr 8 00:03:36.457: INFO: Waiting up to 15m0s for pod "adopt-release-5ktgt" in namespace "job-3529" to be "adopted"
Apr 8 00:03:36.498: INFO: Pod "adopt-release-5ktgt": Phase="Running", Reason="", readiness=true. Elapsed: 40.574186ms
Apr 8 00:03:38.502: INFO: Pod "adopt-release-5ktgt": Phase="Running", Reason="", readiness=true. Elapsed: 2.045186695s
Apr 8 00:03:38.502: INFO: Pod "adopt-release-5ktgt" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Apr 8 00:03:39.011: INFO: Successfully updated pod "adopt-release-5ktgt"
STEP: Checking that the Job releases the Pod
Apr 8 00:03:39.011: INFO: Waiting up to 15m0s for pod "adopt-release-5ktgt" in namespace "job-3529" to be "released"
Apr 8 00:03:39.016: INFO: Pod "adopt-release-5ktgt": Phase="Running", Reason="", readiness=true. Elapsed: 5.168409ms
Apr 8 00:03:41.021: INFO: Pod "adopt-release-5ktgt": Phase="Running", Reason="", readiness=true. Elapsed: 2.009938724s
Apr 8 00:03:41.021: INFO: Pod "adopt-release-5ktgt" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:03:41.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3529" for this suite.
• [SLOW TEST:9.236 seconds]
[sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":113,"skipped":2164,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:03:41.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 8 00:03:41.278: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 8 00:03:41.286: INFO: Waiting for terminating namespaces to be deleted...
Apr 8 00:03:41.288: INFO: Logging pods the kubelet thinks are on node latest-worker before test
Apr 8 00:03:41.301: INFO: adopt-release-xwwhq from job-3529 started at 2020-04-08 00:03:31 +0000 UTC (1 container statuses recorded)
Apr 8 00:03:41.301: INFO: Container c ready: true, restart count 0
Apr 8 00:03:41.301: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 8 00:03:41.301: INFO: Container kindnet-cni ready: true, restart count 0
Apr 8 00:03:41.301: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 8 00:03:41.301: INFO: Container kube-proxy ready: true, restart count 0
Apr 8 00:03:41.301: INFO: adopt-release-5ktgt from job-3529 started at 2020-04-08 00:03:32 +0000 UTC (1 container statuses recorded)
Apr 8 00:03:41.301: INFO: Container c ready: true, restart count 0
Apr 8 00:03:41.301: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test
Apr 8 00:03:41.305: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 8 00:03:41.305: INFO: Container kindnet-cni ready: true, restart count 0
Apr 8 00:03:41.305: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 8 00:03:41.305: INFO: Container kube-proxy ready: true, restart count 0
Apr 8 00:03:41.305: INFO: adopt-release-jbdpj from job-3529 started at 2020-04-08 00:03:39 +0000 UTC (1 container statuses recorded)
Apr 8 00:03:41.305: INFO: Container c ready: false, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-6c4e6533-17bb-4c12-a035-e01460ee6c7a 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-6c4e6533-17bb-4c12-a035-e01460ee6c7a off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-6c4e6533-17bb-4c12-a035-e01460ee6c7a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:08:49.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9981" for this suite.
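The scheduling predicate exercised above treats a hostPort bound on 0.0.0.0 as covering every hostIP, so pod5 (same hostPort and protocol, hostIP 127.0.0.1, pinned to pod4's node) stays unschedulable. A rough sketch of the two pod specs (image and node selection are illustrative assumptions, not the manifests the suite actually generates; the test pins pods via a random e2e label, shown here with the stock kubernetes.io/hostname label instead):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/hostname: latest-worker2   # same node for both pods
  containers:
  - name: agnhost                            # illustrative image choice
    image: k8s.gcr.io/e2e-test-images/agnhost:2.12
    ports:
    - containerPort: 8080
      hostPort: 54322
      protocol: TCP                          # hostIP omitted => 0.0.0.0 (all addresses)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/hostname: latest-worker2
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.12
    ports:
    - containerPort: 8080
      hostPort: 54322
      protocol: TCP
      hostIP: 127.0.0.1                      # conflicts with pod4's 0.0.0.0 binding
```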
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:308.433 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":114,"skipped":2184,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:08:49.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Apr 8 00:08:49.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2421'
Apr 8 00:08:52.214: INFO: stderr: ""
Apr 8 00:08:52.214: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 8 00:08:52.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2421'
Apr 8 00:08:52.381: INFO: stderr: ""
Apr 8 00:08:52.381: INFO: stdout: "update-demo-nautilus-f6kc7 update-demo-nautilus-tl5dv "
Apr 8 00:08:52.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f6kc7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2421'
Apr 8 00:08:52.483: INFO: stderr: ""
Apr 8 00:08:52.483: INFO: stdout: ""
Apr 8 00:08:52.483: INFO: update-demo-nautilus-f6kc7 is created but not running
Apr 8 00:08:57.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2421'
Apr 8 00:08:57.586: INFO: stderr: ""
Apr 8 00:08:57.586: INFO: stdout: "update-demo-nautilus-f6kc7 update-demo-nautilus-tl5dv "
Apr 8 00:08:57.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f6kc7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2421'
Apr 8 00:08:57.671: INFO: stderr: ""
Apr 8 00:08:57.671: INFO: stdout: "true"
Apr 8 00:08:57.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f6kc7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2421'
Apr 8 00:08:57.758: INFO: stderr: ""
Apr 8 00:08:57.758: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 8 00:08:57.758: INFO: validating pod update-demo-nautilus-f6kc7
Apr 8 00:08:57.762: INFO: got data: { "image": "nautilus.jpg" }
Apr 8 00:08:57.762: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 8 00:08:57.762: INFO: update-demo-nautilus-f6kc7 is verified up and running
Apr 8 00:08:57.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tl5dv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2421'
Apr 8 00:08:57.863: INFO: stderr: ""
Apr 8 00:08:57.864: INFO: stdout: "true"
Apr 8 00:08:57.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tl5dv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2421'
Apr 8 00:08:57.958: INFO: stderr: ""
Apr 8 00:08:57.958: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 8 00:08:57.958: INFO: validating pod update-demo-nautilus-tl5dv
Apr 8 00:08:57.963: INFO: got data: { "image": "nautilus.jpg" }
Apr 8 00:08:57.963: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 8 00:08:57.963: INFO: update-demo-nautilus-tl5dv is verified up and running
STEP: using delete to clean up resources
Apr 8 00:08:57.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2421'
Apr 8 00:08:58.059: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 8 00:08:58.059: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 8 00:08:58.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2421'
Apr 8 00:08:58.162: INFO: stderr: "No resources found in kubectl-2421 namespace.\n"
Apr 8 00:08:58.162: INFO: stdout: ""
Apr 8 00:08:58.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2421 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 8 00:08:58.275: INFO: stderr: ""
Apr 8 00:08:58.275: INFO: stdout: "update-demo-nautilus-f6kc7\nupdate-demo-nautilus-tl5dv\n"
Apr 8 00:08:58.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2421'
Apr 8 00:08:58.872: INFO: stderr: "No resources found in kubectl-2421 namespace.\n"
Apr 8 00:08:58.872: INFO: stdout: ""
Apr 8 00:08:58.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2421 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 8 00:08:58.969: INFO: stderr: ""
Apr 8 00:08:58.969: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:08:58.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2421" for this suite.
• [SLOW TEST:9.477 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":115,"skipped":2186,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:08:59.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 8 00:08:59.106: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc4b9c4a-934b-46f3-b356-d282c9ff6087" in namespace "downward-api-7676" to be "Succeeded or Failed"
Apr 8 00:08:59.116: INFO: Pod "downwardapi-volume-cc4b9c4a-934b-46f3-b356-d282c9ff6087": Phase="Pending", Reason="", readiness=false. Elapsed: 9.486877ms
Apr 8 00:09:01.120: INFO: Pod "downwardapi-volume-cc4b9c4a-934b-46f3-b356-d282c9ff6087": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013150364s
Apr 8 00:09:03.124: INFO: Pod "downwardapi-volume-cc4b9c4a-934b-46f3-b356-d282c9ff6087": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017302903s
STEP: Saw pod success
Apr 8 00:09:03.124: INFO: Pod "downwardapi-volume-cc4b9c4a-934b-46f3-b356-d282c9ff6087" satisfied condition "Succeeded or Failed"
Apr 8 00:09:03.133: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-cc4b9c4a-934b-46f3-b356-d282c9ff6087 container client-container:
STEP: delete the pod
Apr 8 00:09:03.239: INFO: Waiting for pod downwardapi-volume-cc4b9c4a-934b-46f3-b356-d282c9ff6087 to disappear
Apr 8 00:09:03.255: INFO: Pod downwardapi-volume-cc4b9c4a-934b-46f3-b356-d282c9ff6087 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:09:03.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7676" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":2194,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:09:03.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-df175f39-99a9-4716-ade6-0b66447ce45b
STEP: Creating a pod to test consume secrets
Apr 8 00:09:03.359: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8fb20661-1266-4319-9c98-882c78cabd37" in namespace "projected-1655" to be "Succeeded or Failed"
Apr 8 00:09:03.375: INFO: Pod "pod-projected-secrets-8fb20661-1266-4319-9c98-882c78cabd37": Phase="Pending", Reason="", readiness=false. Elapsed: 16.044064ms
Apr 8 00:09:05.378: INFO: Pod "pod-projected-secrets-8fb20661-1266-4319-9c98-882c78cabd37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019784945s
Apr 8 00:09:07.383: INFO: Pod "pod-projected-secrets-8fb20661-1266-4319-9c98-882c78cabd37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024145497s
STEP: Saw pod success
Apr 8 00:09:07.383: INFO: Pod "pod-projected-secrets-8fb20661-1266-4319-9c98-882c78cabd37" satisfied condition "Succeeded or Failed"
Apr 8 00:09:07.386: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-8fb20661-1266-4319-9c98-882c78cabd37 container secret-volume-test:
STEP: delete the pod
Apr 8 00:09:07.456: INFO: Waiting for pod pod-projected-secrets-8fb20661-1266-4319-9c98-882c78cabd37 to disappear
Apr 8 00:09:07.470: INFO: Pod pod-projected-secrets-8fb20661-1266-4319-9c98-882c78cabd37 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:09:07.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1655" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":2200,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:09:07.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:09:11.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3509" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":118,"skipped":2210,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:09:11.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 8 00:09:11.759: INFO: Waiting up to 5m0s for pod "pod-b887ca03-bae7-4576-900b-b83eff8a23cb" in namespace "emptydir-9701" to be "Succeeded or Failed"
Apr 8 00:09:11.763: INFO: Pod "pod-b887ca03-bae7-4576-900b-b83eff8a23cb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.647284ms
Apr 8 00:09:13.767: INFO: Pod "pod-b887ca03-bae7-4576-900b-b83eff8a23cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007842322s
Apr 8 00:09:15.771: INFO: Pod "pod-b887ca03-bae7-4576-900b-b83eff8a23cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011881521s
STEP: Saw pod success
Apr 8 00:09:15.771: INFO: Pod "pod-b887ca03-bae7-4576-900b-b83eff8a23cb" satisfied condition "Succeeded or Failed"
Apr 8 00:09:15.774: INFO: Trying to get logs from node latest-worker2 pod pod-b887ca03-bae7-4576-900b-b83eff8a23cb container test-container:
STEP: delete the pod
Apr 8 00:09:15.794: INFO: Waiting for pod pod-b887ca03-bae7-4576-900b-b83eff8a23cb to disappear
Apr 8 00:09:15.799: INFO: Pod pod-b887ca03-bae7-4576-900b-b83eff8a23cb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:09:15.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9701" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":2261,"failed":0}
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:09:15.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Apr 8 00:09:15.860: INFO: Waiting up to 5m0s for pod "var-expansion-89c67e12-e717-4275-a7e4-594aa6469999" in namespace "var-expansion-7943" to be "Succeeded or Failed"
Apr 8 00:09:15.876: INFO: Pod "var-expansion-89c67e12-e717-4275-a7e4-594aa6469999": Phase="Pending", Reason="", readiness=false. Elapsed: 15.40913ms
Apr 8 00:09:17.880: INFO: Pod "var-expansion-89c67e12-e717-4275-a7e4-594aa6469999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019824453s
Apr 8 00:09:19.885: INFO: Pod "var-expansion-89c67e12-e717-4275-a7e4-594aa6469999": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024822443s
STEP: Saw pod success
Apr 8 00:09:19.885: INFO: Pod "var-expansion-89c67e12-e717-4275-a7e4-594aa6469999" satisfied condition "Succeeded or Failed"
Apr 8 00:09:19.888: INFO: Trying to get logs from node latest-worker pod var-expansion-89c67e12-e717-4275-a7e4-594aa6469999 container dapi-container:
STEP: delete the pod
Apr 8 00:09:19.933: INFO: Waiting for pod var-expansion-89c67e12-e717-4275-a7e4-594aa6469999 to disappear
Apr 8 00:09:19.943: INFO: Pod var-expansion-89c67e12-e717-4275-a7e4-594aa6469999 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:09:19.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7943" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":2261,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:09:19.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 8 00:09:20.401: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 8 00:09:22.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901360, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901360, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901360, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901360, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 8 00:09:25.482: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:09:25.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4731" for this suite.
STEP: Destroying namespace "webhook-4731-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.770 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":121,"skipped":2267,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:09:25.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 8 00:09:28.863: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:09:28.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1212" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":122,"skipped":2281,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:09:28.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-f14772f9-8584-4b40-ba8a-c1db5676db13
STEP: Creating a pod to test consume secrets
Apr 8 00:09:28.983: INFO: Waiting up to 5m0s for pod "pod-secrets-8c1fa046-9112-4e4f-9fff-97cb2a8ab6c8" in namespace "secrets-2918" to be "Succeeded or Failed"
Apr 8 00:09:28.991: INFO: Pod "pod-secrets-8c1fa046-9112-4e4f-9fff-97cb2a8ab6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15797ms
Apr 8 00:09:30.999: INFO: Pod "pod-secrets-8c1fa046-9112-4e4f-9fff-97cb2a8ab6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01643473s
Apr 8 00:09:33.004: INFO: Pod "pod-secrets-8c1fa046-9112-4e4f-9fff-97cb2a8ab6c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020704888s
STEP: Saw pod success
Apr 8 00:09:33.004: INFO: Pod "pod-secrets-8c1fa046-9112-4e4f-9fff-97cb2a8ab6c8" satisfied condition "Succeeded or Failed"
Apr 8 00:09:33.007: INFO: Trying to get logs from node latest-worker pod pod-secrets-8c1fa046-9112-4e4f-9fff-97cb2a8ab6c8 container secret-volume-test:
STEP: delete the pod
Apr 8 00:09:33.023: INFO: Waiting for pod pod-secrets-8c1fa046-9112-4e4f-9fff-97cb2a8ab6c8 to disappear
Apr 8 00:09:33.046: INFO: Pod pod-secrets-8c1fa046-9112-4e4f-9fff-97cb2a8ab6c8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:09:33.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2918" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2294,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:09:33.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9805.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9805.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9805.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9805.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9805.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9805.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 00:09:39.328: INFO: DNS probes using dns-9805/dns-test-e5b7230b-4d0c-4660-825b-5296f4f3ec65 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:09:39.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9805" for this suite. 
• [SLOW TEST:6.374 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":124,"skipped":2328,"failed":0} SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:09:39.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-e2af7f0d-32d9-44d9-833d-0c3ca5efb105 in namespace container-probe-27 Apr 8 00:09:43.815: INFO: Started pod liveness-e2af7f0d-32d9-44d9-833d-0c3ca5efb105 in namespace container-probe-27 STEP: checking the pod's current state and verifying that restartCount is present Apr 8 00:09:43.818: INFO: Initial restart count of pod liveness-e2af7f0d-32d9-44d9-833d-0c3ca5efb105 is 0 Apr 8 00:10:07.870: INFO: Restart count of pod container-probe-27/liveness-e2af7f0d-32d9-44d9-833d-0c3ca5efb105 is now 1 (24.051541365s elapsed) STEP: 
deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:10:07.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-27" for this suite. • [SLOW TEST:28.503 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2330,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:10:07.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-fhsx STEP: Creating a pod to test atomic-volume-subpath Apr 8 00:10:08.007: INFO: Waiting up to 5m0s for pod 
"pod-subpath-test-projected-fhsx" in namespace "subpath-2787" to be "Succeeded or Failed" Apr 8 00:10:08.010: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Pending", Reason="", readiness=false. Elapsed: 3.944753ms Apr 8 00:10:10.018: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011450589s Apr 8 00:10:12.022: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Running", Reason="", readiness=true. Elapsed: 4.015765531s Apr 8 00:10:14.027: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Running", Reason="", readiness=true. Elapsed: 6.019988202s Apr 8 00:10:16.031: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Running", Reason="", readiness=true. Elapsed: 8.023978436s Apr 8 00:10:18.035: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Running", Reason="", readiness=true. Elapsed: 10.028205488s Apr 8 00:10:20.039: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Running", Reason="", readiness=true. Elapsed: 12.032558312s Apr 8 00:10:22.046: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Running", Reason="", readiness=true. Elapsed: 14.039557643s Apr 8 00:10:24.050: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Running", Reason="", readiness=true. Elapsed: 16.04360321s Apr 8 00:10:26.055: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Running", Reason="", readiness=true. Elapsed: 18.048079969s Apr 8 00:10:28.059: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Running", Reason="", readiness=true. Elapsed: 20.052483043s Apr 8 00:10:30.063: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Running", Reason="", readiness=true. Elapsed: 22.056864821s Apr 8 00:10:32.068: INFO: Pod "pod-subpath-test-projected-fhsx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.061395828s STEP: Saw pod success Apr 8 00:10:32.068: INFO: Pod "pod-subpath-test-projected-fhsx" satisfied condition "Succeeded or Failed" Apr 8 00:10:32.084: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-fhsx container test-container-subpath-projected-fhsx: STEP: delete the pod Apr 8 00:10:32.101: INFO: Waiting for pod pod-subpath-test-projected-fhsx to disappear Apr 8 00:10:32.106: INFO: Pod pod-subpath-test-projected-fhsx no longer exists STEP: Deleting pod pod-subpath-test-projected-fhsx Apr 8 00:10:32.106: INFO: Deleting pod "pod-subpath-test-projected-fhsx" in namespace "subpath-2787" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:10:32.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2787" for this suite. • [SLOW TEST:24.176 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":126,"skipped":2376,"failed":0} [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:10:32.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-2327/configmap-test-191117a9-9106-4312-9d4c-2216463a4536 STEP: Creating a pod to test consume configMaps Apr 8 00:10:32.215: INFO: Waiting up to 5m0s for pod "pod-configmaps-0953264d-49a7-4346-86a5-6f3254e3dd54" in namespace "configmap-2327" to be "Succeeded or Failed" Apr 8 00:10:32.218: INFO: Pod "pod-configmaps-0953264d-49a7-4346-86a5-6f3254e3dd54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544445ms Apr 8 00:10:34.227: INFO: Pod "pod-configmaps-0953264d-49a7-4346-86a5-6f3254e3dd54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012123204s Apr 8 00:10:36.231: INFO: Pod "pod-configmaps-0953264d-49a7-4346-86a5-6f3254e3dd54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016238159s STEP: Saw pod success Apr 8 00:10:36.232: INFO: Pod "pod-configmaps-0953264d-49a7-4346-86a5-6f3254e3dd54" satisfied condition "Succeeded or Failed" Apr 8 00:10:36.235: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0953264d-49a7-4346-86a5-6f3254e3dd54 container env-test: STEP: delete the pod Apr 8 00:10:36.257: INFO: Waiting for pod pod-configmaps-0953264d-49a7-4346-86a5-6f3254e3dd54 to disappear Apr 8 00:10:36.262: INFO: Pod pod-configmaps-0953264d-49a7-4346-86a5-6f3254e3dd54 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:10:36.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2327" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2376,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:10:36.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:10:36.336: INFO: Creating deployment "test-recreate-deployment" Apr 8 00:10:36.350: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 8 00:10:36.371: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 8 00:10:38.378: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 8 00:10:38.381: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901436, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901436, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901436, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901436, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 00:10:40.385: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 8 00:10:40.392: INFO: Updating deployment test-recreate-deployment Apr 8 00:10:40.392: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 8 00:10:40.813: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4518 /apis/apps/v1/namespaces/deployment-4518/deployments/test-recreate-deployment b97f30ff-6a8a-48e5-8409-929c09461ab0 6275694 2 2020-04-08 00:10:36 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005bc76f8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-08 00:10:40 +0000 UTC,LastTransitionTime:2020-04-08 00:10:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-08 00:10:40 +0000 UTC,LastTransitionTime:2020-04-08 00:10:36 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 8 00:10:40.846: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4518 /apis/apps/v1/namespaces/deployment-4518/replicasets/test-recreate-deployment-5f94c574ff bd154f7c-bd5e-45ef-ad9a-d2a09042779a 6275693 1 2020-04-08 00:10:40 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment b97f30ff-6a8a-48e5-8409-929c09461ab0 0xc005bc7b07 0xc005bc7b08}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] 
[] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005bc7b88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 00:10:40.846: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 8 00:10:40.846: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-4518 /apis/apps/v1/namespaces/deployment-4518/replicasets/test-recreate-deployment-846c7dd955 07eb59ca-4b3c-4d3b-89cd-cfcbdbae4332 6275683 2 2020-04-08 00:10:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment b97f30ff-6a8a-48e5-8409-929c09461ab0 0xc005bc7c07 0xc005bc7c08}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005bc7c78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 00:10:40.898: INFO: Pod "test-recreate-deployment-5f94c574ff-bsfj6" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-bsfj6 test-recreate-deployment-5f94c574ff- deployment-4518 /api/v1/namespaces/deployment-4518/pods/test-recreate-deployment-5f94c574ff-bsfj6 3ac44b44-1978-4ece-a0c6-6b65e60ed2c9 6275696 0 2020-04-08 00:10:40 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff bd154f7c-bd5e-45ef-ad9a-d2a09042779a 0xc002330427 0xc002330428}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-trmb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-trmb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-trmb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:10:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:10:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:10:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:10:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-08 00:10:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:10:40.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4518" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":128,"skipped":2383,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:10:40.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct 
watchers observe the notification Apr 8 00:10:41.048: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4257 /api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-a b828b95a-0cf8-4afd-b2fb-fdb3ae236e21 6275704 0 2020-04-08 00:10:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 8 00:10:41.048: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4257 /api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-a b828b95a-0cf8-4afd-b2fb-fdb3ae236e21 6275704 0 2020-04-08 00:10:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 8 00:10:51.055: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4257 /api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-a b828b95a-0cf8-4afd-b2fb-fdb3ae236e21 6275765 0 2020-04-08 00:10:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 8 00:10:51.055: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4257 /api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-a b828b95a-0cf8-4afd-b2fb-fdb3ae236e21 6275765 0 2020-04-08 00:10:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 8 00:11:01.063: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4257 /api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-a b828b95a-0cf8-4afd-b2fb-fdb3ae236e21 6275795 0 2020-04-08 00:10:41 +0000 
UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 8 00:11:01.063: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4257 /api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-a b828b95a-0cf8-4afd-b2fb-fdb3ae236e21 6275795 0 2020-04-08 00:10:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 8 00:11:11.071: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4257 /api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-a b828b95a-0cf8-4afd-b2fb-fdb3ae236e21 6275825 0 2020-04-08 00:10:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 8 00:11:11.071: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4257 /api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-a b828b95a-0cf8-4afd-b2fb-fdb3ae236e21 6275825 0 2020-04-08 00:10:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 8 00:11:21.093: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4257 /api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-b 547b022e-b750-4003-a5a9-56f16455cf3b 6275855 0 2020-04-08 00:11:21 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 8 00:11:21.093: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4257 
/api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-b 547b022e-b750-4003-a5a9-56f16455cf3b 6275855 0 2020-04-08 00:11:21 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 8 00:11:31.100: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4257 /api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-b 547b022e-b750-4003-a5a9-56f16455cf3b 6275885 0 2020-04-08 00:11:21 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 8 00:11:31.100: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4257 /api/v1/namespaces/watch-4257/configmaps/e2e-watch-test-configmap-b 547b022e-b750-4003-a5a9-56f16455cf3b 6275885 0 2020-04-08 00:11:21 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:11:41.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4257" for this suite. 
• [SLOW TEST:60.155 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":129,"skipped":2394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:11:41.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 00:11:41.609: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 00:11:43.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901501, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901501, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901501, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901501, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 00:11:45.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901501, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901501, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901501, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901501, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 00:11:48.650: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 8 00:11:52.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-3180 to-be-attached-pod -i -c=container1' Apr 8 00:11:52.805: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:11:52.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3180" for this suite. STEP: Destroying namespace "webhook-3180-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.784 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":130,"skipped":2427,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client Apr 8 00:11:52.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 8 00:11:52.949: INFO: Waiting up to 5m0s for pod "pod-67e58c09-4069-45d1-a062-6bdae220eef5" in namespace "emptydir-7460" to be "Succeeded or Failed" Apr 8 00:11:52.959: INFO: Pod "pod-67e58c09-4069-45d1-a062-6bdae220eef5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.576884ms Apr 8 00:11:54.963: INFO: Pod "pod-67e58c09-4069-45d1-a062-6bdae220eef5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013455825s Apr 8 00:11:56.967: INFO: Pod "pod-67e58c09-4069-45d1-a062-6bdae220eef5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017791567s STEP: Saw pod success Apr 8 00:11:56.967: INFO: Pod "pod-67e58c09-4069-45d1-a062-6bdae220eef5" satisfied condition "Succeeded or Failed" Apr 8 00:11:56.971: INFO: Trying to get logs from node latest-worker2 pod pod-67e58c09-4069-45d1-a062-6bdae220eef5 container test-container: STEP: delete the pod Apr 8 00:11:57.004: INFO: Waiting for pod pod-67e58c09-4069-45d1-a062-6bdae220eef5 to disappear Apr 8 00:11:57.019: INFO: Pod pod-67e58c09-4069-45d1-a062-6bdae220eef5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:11:57.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7460" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2440,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:11:57.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:12:14.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-659" for this suite. • [SLOW TEST:17.113 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":132,"skipped":2444,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:12:14.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:12:28.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5012" for this suite. 
• [SLOW TEST:14.106 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":133,"skipped":2454,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:12:28.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:12:28.361: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b04b1527-c1ff-46c0-8685-3e5c31b0e712" in namespace "projected-7909" to be "Succeeded or Failed" Apr 8 00:12:28.381: INFO: Pod "downwardapi-volume-b04b1527-c1ff-46c0-8685-3e5c31b0e712": Phase="Pending", Reason="", readiness=false. Elapsed: 20.057618ms Apr 8 00:12:30.385: INFO: Pod "downwardapi-volume-b04b1527-c1ff-46c0-8685-3e5c31b0e712": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023923296s Apr 8 00:12:32.390: INFO: Pod "downwardapi-volume-b04b1527-c1ff-46c0-8685-3e5c31b0e712": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028342956s STEP: Saw pod success Apr 8 00:12:32.390: INFO: Pod "downwardapi-volume-b04b1527-c1ff-46c0-8685-3e5c31b0e712" satisfied condition "Succeeded or Failed" Apr 8 00:12:32.393: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b04b1527-c1ff-46c0-8685-3e5c31b0e712 container client-container: STEP: delete the pod Apr 8 00:12:32.409: INFO: Waiting for pod downwardapi-volume-b04b1527-c1ff-46c0-8685-3e5c31b0e712 to disappear Apr 8 00:12:32.414: INFO: Pod downwardapi-volume-b04b1527-c1ff-46c0-8685-3e5c31b0e712 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:12:32.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7909" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2457,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:12:32.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 8 00:12:32.482: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 8 00:12:32.492: INFO: Waiting for terminating namespaces to be deleted... 
Apr 8 00:12:32.494: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 8 00:12:32.512: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:12:32.512: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 00:12:32.512: INFO: fail-once-local-dtbkq from job-5012 started at 2020-04-08 00:12:19 +0000 UTC (1 container statuses recorded) Apr 8 00:12:32.512: INFO: Container c ready: false, restart count 1 Apr 8 00:12:32.512: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:12:32.512: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 00:12:32.512: INFO: fail-once-local-9rmbd from job-5012 started at 2020-04-08 00:12:14 +0000 UTC (1 container statuses recorded) Apr 8 00:12:32.512: INFO: Container c ready: false, restart count 1 Apr 8 00:12:32.512: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 8 00:12:32.517: INFO: fail-once-local-vf5vx from job-5012 started at 2020-04-08 00:12:14 +0000 UTC (1 container statuses recorded) Apr 8 00:12:32.517: INFO: Container c ready: false, restart count 1 Apr 8 00:12:32.517: INFO: fail-once-local-nqmx7 from job-5012 started at 2020-04-08 00:12:20 +0000 UTC (1 container statuses recorded) Apr 8 00:12:32.517: INFO: Container c ready: false, restart count 1 Apr 8 00:12:32.517: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:12:32.517: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 00:12:32.517: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:12:32.517: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-9c00f117-24ac-44c0-8e1e-338dcd376789 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-9c00f117-24ac-44c0-8e1e-338dcd376789 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-9c00f117-24ac-44c0-8e1e-338dcd376789 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:12:40.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2220" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.294 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":135,"skipped":2462,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:12:40.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-ee4a19c5-1f80-4516-a8eb-e7aa57c732e6 in namespace container-probe-838 Apr 8 00:12:44.809: INFO: Started pod busybox-ee4a19c5-1f80-4516-a8eb-e7aa57c732e6 in namespace container-probe-838 STEP: checking the pod's current state and verifying that restartCount is present Apr 8 00:12:44.812: INFO: Initial restart count of pod busybox-ee4a19c5-1f80-4516-a8eb-e7aa57c732e6 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:16:45.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-838" for this suite. 
• [SLOW TEST:244.632 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2470,"failed":0} SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:16:45.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 8 00:16:55.474: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7322 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:16:55.474: INFO: >>> kubeConfig: /root/.kube/config I0408 00:16:55.516823 7 log.go:172] (0xc002d902c0) (0xc000f1bae0) Create stream I0408 
00:16:55.516869 7 log.go:172] (0xc002d902c0) (0xc000f1bae0) Stream added, broadcasting: 1 I0408 00:16:55.519739 7 log.go:172] (0xc002d902c0) Reply frame received for 1 I0408 00:16:55.519792 7 log.go:172] (0xc002d902c0) (0xc000dfb360) Create stream I0408 00:16:55.519824 7 log.go:172] (0xc002d902c0) (0xc000dfb360) Stream added, broadcasting: 3 I0408 00:16:55.521018 7 log.go:172] (0xc002d902c0) Reply frame received for 3 I0408 00:16:55.521073 7 log.go:172] (0xc002d902c0) (0xc0029b37c0) Create stream I0408 00:16:55.521089 7 log.go:172] (0xc002d902c0) (0xc0029b37c0) Stream added, broadcasting: 5 I0408 00:16:55.522152 7 log.go:172] (0xc002d902c0) Reply frame received for 5 I0408 00:16:55.596749 7 log.go:172] (0xc002d902c0) Data frame received for 5 I0408 00:16:55.596796 7 log.go:172] (0xc0029b37c0) (5) Data frame handling I0408 00:16:55.596833 7 log.go:172] (0xc002d902c0) Data frame received for 3 I0408 00:16:55.596849 7 log.go:172] (0xc000dfb360) (3) Data frame handling I0408 00:16:55.596872 7 log.go:172] (0xc000dfb360) (3) Data frame sent I0408 00:16:55.596888 7 log.go:172] (0xc002d902c0) Data frame received for 3 I0408 00:16:55.596901 7 log.go:172] (0xc000dfb360) (3) Data frame handling I0408 00:16:55.598229 7 log.go:172] (0xc002d902c0) Data frame received for 1 I0408 00:16:55.598246 7 log.go:172] (0xc000f1bae0) (1) Data frame handling I0408 00:16:55.598256 7 log.go:172] (0xc000f1bae0) (1) Data frame sent I0408 00:16:55.598269 7 log.go:172] (0xc002d902c0) (0xc000f1bae0) Stream removed, broadcasting: 1 I0408 00:16:55.598284 7 log.go:172] (0xc002d902c0) Go away received I0408 00:16:55.598444 7 log.go:172] (0xc002d902c0) (0xc000f1bae0) Stream removed, broadcasting: 1 I0408 00:16:55.598459 7 log.go:172] (0xc002d902c0) (0xc000dfb360) Stream removed, broadcasting: 3 I0408 00:16:55.598467 7 log.go:172] (0xc002d902c0) (0xc0029b37c0) Stream removed, broadcasting: 5 Apr 8 00:16:55.598: INFO: Exec stderr: "" Apr 8 00:16:55.598: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7322 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:16:55.598: INFO: >>> kubeConfig: /root/.kube/config I0408 00:16:55.626357 7 log.go:172] (0xc002d908f0) (0xc000252c80) Create stream I0408 00:16:55.626395 7 log.go:172] (0xc002d908f0) (0xc000252c80) Stream added, broadcasting: 1 I0408 00:16:55.628710 7 log.go:172] (0xc002d908f0) Reply frame received for 1 I0408 00:16:55.628750 7 log.go:172] (0xc002d908f0) (0xc0011640a0) Create stream I0408 00:16:55.628764 7 log.go:172] (0xc002d908f0) (0xc0011640a0) Stream added, broadcasting: 3 I0408 00:16:55.629957 7 log.go:172] (0xc002d908f0) Reply frame received for 3 I0408 00:16:55.630004 7 log.go:172] (0xc002d908f0) (0xc000534140) Create stream I0408 00:16:55.630018 7 log.go:172] (0xc002d908f0) (0xc000534140) Stream added, broadcasting: 5 I0408 00:16:55.631041 7 log.go:172] (0xc002d908f0) Reply frame received for 5 I0408 00:16:55.692086 7 log.go:172] (0xc002d908f0) Data frame received for 5 I0408 00:16:55.692141 7 log.go:172] (0xc000534140) (5) Data frame handling I0408 00:16:55.692167 7 log.go:172] (0xc002d908f0) Data frame received for 3 I0408 00:16:55.692184 7 log.go:172] (0xc0011640a0) (3) Data frame handling I0408 00:16:55.692198 7 log.go:172] (0xc0011640a0) (3) Data frame sent I0408 00:16:55.692210 7 log.go:172] (0xc002d908f0) Data frame received for 3 I0408 00:16:55.692281 7 log.go:172] (0xc0011640a0) (3) Data frame handling I0408 00:16:55.693831 7 log.go:172] (0xc002d908f0) Data frame received for 1 I0408 00:16:55.693866 7 log.go:172] (0xc000252c80) (1) Data frame handling I0408 00:16:55.693889 7 log.go:172] (0xc000252c80) (1) Data frame sent I0408 00:16:55.693913 7 log.go:172] (0xc002d908f0) (0xc000252c80) Stream removed, broadcasting: 1 I0408 00:16:55.693966 7 log.go:172] (0xc002d908f0) Go away received I0408 00:16:55.694082 7 log.go:172] (0xc002d908f0) (0xc000252c80) Stream removed, 
broadcasting: 1 I0408 00:16:55.694115 7 log.go:172] (0xc002d908f0) (0xc0011640a0) Stream removed, broadcasting: 3 I0408 00:16:55.694128 7 log.go:172] (0xc002d908f0) (0xc000534140) Stream removed, broadcasting: 5 Apr 8 00:16:55.694: INFO: Exec stderr: "" Apr 8 00:16:55.694: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7322 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:16:55.694: INFO: >>> kubeConfig: /root/.kube/config I0408 00:16:55.722816 7 log.go:172] (0xc002d91080) (0xc0002c8a00) Create stream I0408 00:16:55.722865 7 log.go:172] (0xc002d91080) (0xc0002c8a00) Stream added, broadcasting: 1 I0408 00:16:55.725526 7 log.go:172] (0xc002d91080) Reply frame received for 1 I0408 00:16:55.725577 7 log.go:172] (0xc002d91080) (0xc001274140) Create stream I0408 00:16:55.725591 7 log.go:172] (0xc002d91080) (0xc001274140) Stream added, broadcasting: 3 I0408 00:16:55.726724 7 log.go:172] (0xc002d91080) Reply frame received for 3 I0408 00:16:55.726777 7 log.go:172] (0xc002d91080) (0xc001274280) Create stream I0408 00:16:55.726799 7 log.go:172] (0xc002d91080) (0xc001274280) Stream added, broadcasting: 5 I0408 00:16:55.727842 7 log.go:172] (0xc002d91080) Reply frame received for 5 I0408 00:16:55.796609 7 log.go:172] (0xc002d91080) Data frame received for 3 I0408 00:16:55.796636 7 log.go:172] (0xc001274140) (3) Data frame handling I0408 00:16:55.796647 7 log.go:172] (0xc001274140) (3) Data frame sent I0408 00:16:55.796652 7 log.go:172] (0xc002d91080) Data frame received for 3 I0408 00:16:55.796656 7 log.go:172] (0xc001274140) (3) Data frame handling I0408 00:16:55.796762 7 log.go:172] (0xc002d91080) Data frame received for 5 I0408 00:16:55.796804 7 log.go:172] (0xc001274280) (5) Data frame handling I0408 00:16:55.798770 7 log.go:172] (0xc002d91080) Data frame received for 1 I0408 00:16:55.798786 7 log.go:172] (0xc0002c8a00) (1) Data frame handling I0408 00:16:55.798794 7 
log.go:172] (0xc0002c8a00) (1) Data frame sent I0408 00:16:55.798921 7 log.go:172] (0xc002d91080) (0xc0002c8a00) Stream removed, broadcasting: 1 I0408 00:16:55.799037 7 log.go:172] (0xc002d91080) Go away received I0408 00:16:55.799104 7 log.go:172] (0xc002d91080) (0xc0002c8a00) Stream removed, broadcasting: 1 I0408 00:16:55.799138 7 log.go:172] (0xc002d91080) (0xc001274140) Stream removed, broadcasting: 3 I0408 00:16:55.799154 7 log.go:172] (0xc002d91080) (0xc001274280) Stream removed, broadcasting: 5 Apr 8 00:16:55.799: INFO: Exec stderr: "" Apr 8 00:16:55.799: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7322 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:16:55.799: INFO: >>> kubeConfig: /root/.kube/config I0408 00:16:55.825522 7 log.go:172] (0xc002d916b0) (0xc001274780) Create stream I0408 00:16:55.825569 7 log.go:172] (0xc002d916b0) (0xc001274780) Stream added, broadcasting: 1 I0408 00:16:55.827973 7 log.go:172] (0xc002d916b0) Reply frame received for 1 I0408 00:16:55.828022 7 log.go:172] (0xc002d916b0) (0xc001274aa0) Create stream I0408 00:16:55.828036 7 log.go:172] (0xc002d916b0) (0xc001274aa0) Stream added, broadcasting: 3 I0408 00:16:55.828882 7 log.go:172] (0xc002d916b0) Reply frame received for 3 I0408 00:16:55.828919 7 log.go:172] (0xc002d916b0) (0xc001274b40) Create stream I0408 00:16:55.828937 7 log.go:172] (0xc002d916b0) (0xc001274b40) Stream added, broadcasting: 5 I0408 00:16:55.829845 7 log.go:172] (0xc002d916b0) Reply frame received for 5 I0408 00:16:55.888279 7 log.go:172] (0xc002d916b0) Data frame received for 5 I0408 00:16:55.888313 7 log.go:172] (0xc001274b40) (5) Data frame handling I0408 00:16:55.888365 7 log.go:172] (0xc002d916b0) Data frame received for 3 I0408 00:16:55.888469 7 log.go:172] (0xc001274aa0) (3) Data frame handling I0408 00:16:55.888524 7 log.go:172] (0xc001274aa0) (3) Data frame sent I0408 00:16:55.888553 7 
log.go:172] (0xc002d916b0) Data frame received for 3 I0408 00:16:55.888575 7 log.go:172] (0xc001274aa0) (3) Data frame handling I0408 00:16:55.890087 7 log.go:172] (0xc002d916b0) Data frame received for 1 I0408 00:16:55.890110 7 log.go:172] (0xc001274780) (1) Data frame handling I0408 00:16:55.890127 7 log.go:172] (0xc001274780) (1) Data frame sent I0408 00:16:55.890256 7 log.go:172] (0xc002d916b0) (0xc001274780) Stream removed, broadcasting: 1 I0408 00:16:55.890336 7 log.go:172] (0xc002d916b0) (0xc001274780) Stream removed, broadcasting: 1 I0408 00:16:55.890356 7 log.go:172] (0xc002d916b0) (0xc001274aa0) Stream removed, broadcasting: 3 I0408 00:16:55.890460 7 log.go:172] (0xc002d916b0) Go away received I0408 00:16:55.890506 7 log.go:172] (0xc002d916b0) (0xc001274b40) Stream removed, broadcasting: 5 Apr 8 00:16:55.890: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 8 00:16:55.890: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7322 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:16:55.890: INFO: >>> kubeConfig: /root/.kube/config I0408 00:16:55.925420 7 log.go:172] (0xc0026236b0) (0xc000dfb540) Create stream I0408 00:16:55.925446 7 log.go:172] (0xc0026236b0) (0xc000dfb540) Stream added, broadcasting: 1 I0408 00:16:55.928896 7 log.go:172] (0xc0026236b0) Reply frame received for 1 I0408 00:16:55.928935 7 log.go:172] (0xc0026236b0) (0xc000dfb860) Create stream I0408 00:16:55.928950 7 log.go:172] (0xc0026236b0) (0xc000dfb860) Stream added, broadcasting: 3 I0408 00:16:55.930167 7 log.go:172] (0xc0026236b0) Reply frame received for 3 I0408 00:16:55.930217 7 log.go:172] (0xc0026236b0) (0xc000dfbcc0) Create stream I0408 00:16:55.930233 7 log.go:172] (0xc0026236b0) (0xc000dfbcc0) Stream added, broadcasting: 5 I0408 00:16:55.931310 7 log.go:172] (0xc0026236b0) Reply frame received for 5 
I0408 00:16:55.996952 7 log.go:172] (0xc0026236b0) Data frame received for 5 I0408 00:16:55.996994 7 log.go:172] (0xc000dfbcc0) (5) Data frame handling I0408 00:16:55.997018 7 log.go:172] (0xc0026236b0) Data frame received for 3 I0408 00:16:55.997040 7 log.go:172] (0xc000dfb860) (3) Data frame handling I0408 00:16:55.997057 7 log.go:172] (0xc000dfb860) (3) Data frame sent I0408 00:16:55.997067 7 log.go:172] (0xc0026236b0) Data frame received for 3 I0408 00:16:55.997077 7 log.go:172] (0xc000dfb860) (3) Data frame handling I0408 00:16:56.007394 7 log.go:172] (0xc0026236b0) Data frame received for 1 I0408 00:16:56.007411 7 log.go:172] (0xc000dfb540) (1) Data frame handling I0408 00:16:56.007418 7 log.go:172] (0xc000dfb540) (1) Data frame sent I0408 00:16:56.007427 7 log.go:172] (0xc0026236b0) (0xc000dfb540) Stream removed, broadcasting: 1 I0408 00:16:56.007434 7 log.go:172] (0xc0026236b0) Go away received I0408 00:16:56.007583 7 log.go:172] (0xc0026236b0) (0xc000dfb540) Stream removed, broadcasting: 1 I0408 00:16:56.007609 7 log.go:172] (0xc0026236b0) (0xc000dfb860) Stream removed, broadcasting: 3 I0408 00:16:56.007626 7 log.go:172] (0xc0026236b0) (0xc000dfbcc0) Stream removed, broadcasting: 5 Apr 8 00:16:56.007: INFO: Exec stderr: "" Apr 8 00:16:56.007: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7322 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:16:56.007: INFO: >>> kubeConfig: /root/.kube/config I0408 00:16:56.030453 7 log.go:172] (0xc006178420) (0xc0011645a0) Create stream I0408 00:16:56.030479 7 log.go:172] (0xc006178420) (0xc0011645a0) Stream added, broadcasting: 1 I0408 00:16:56.033362 7 log.go:172] (0xc006178420) Reply frame received for 1 I0408 00:16:56.033396 7 log.go:172] (0xc006178420) (0xc001164820) Create stream I0408 00:16:56.033415 7 log.go:172] (0xc006178420) (0xc001164820) Stream added, broadcasting: 3 I0408 00:16:56.034355 7 
log.go:172] (0xc006178420) Reply frame received for 3 I0408 00:16:56.034387 7 log.go:172] (0xc006178420) (0xc001164b40) Create stream I0408 00:16:56.034401 7 log.go:172] (0xc006178420) (0xc001164b40) Stream added, broadcasting: 5 I0408 00:16:56.035346 7 log.go:172] (0xc006178420) Reply frame received for 5 I0408 00:16:56.113696 7 log.go:172] (0xc006178420) Data frame received for 3 I0408 00:16:56.113755 7 log.go:172] (0xc001164820) (3) Data frame handling I0408 00:16:56.113782 7 log.go:172] (0xc001164820) (3) Data frame sent I0408 00:16:56.113801 7 log.go:172] (0xc006178420) Data frame received for 3 I0408 00:16:56.113813 7 log.go:172] (0xc001164820) (3) Data frame handling I0408 00:16:56.113847 7 log.go:172] (0xc006178420) Data frame received for 5 I0408 00:16:56.113875 7 log.go:172] (0xc001164b40) (5) Data frame handling I0408 00:16:56.115219 7 log.go:172] (0xc006178420) Data frame received for 1 I0408 00:16:56.115251 7 log.go:172] (0xc0011645a0) (1) Data frame handling I0408 00:16:56.115275 7 log.go:172] (0xc0011645a0) (1) Data frame sent I0408 00:16:56.115343 7 log.go:172] (0xc006178420) (0xc0011645a0) Stream removed, broadcasting: 1 I0408 00:16:56.115369 7 log.go:172] (0xc006178420) Go away received I0408 00:16:56.115483 7 log.go:172] (0xc006178420) (0xc0011645a0) Stream removed, broadcasting: 1 I0408 00:16:56.115508 7 log.go:172] (0xc006178420) (0xc001164820) Stream removed, broadcasting: 3 I0408 00:16:56.115520 7 log.go:172] (0xc006178420) (0xc001164b40) Stream removed, broadcasting: 5 Apr 8 00:16:56.115: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 8 00:16:56.115: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7322 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:16:56.115: INFO: >>> kubeConfig: /root/.kube/config I0408 00:16:56.155697 7 log.go:172] 
(0xc002623ef0) (0xc0011bc140) Create stream I0408 00:16:56.155726 7 log.go:172] (0xc002623ef0) (0xc0011bc140) Stream added, broadcasting: 1 I0408 00:16:56.158374 7 log.go:172] (0xc002623ef0) Reply frame received for 1 I0408 00:16:56.158406 7 log.go:172] (0xc002623ef0) (0xc000185900) Create stream I0408 00:16:56.158418 7 log.go:172] (0xc002623ef0) (0xc000185900) Stream added, broadcasting: 3 I0408 00:16:56.159221 7 log.go:172] (0xc002623ef0) Reply frame received for 3 I0408 00:16:56.159237 7 log.go:172] (0xc002623ef0) (0xc001756320) Create stream I0408 00:16:56.159243 7 log.go:172] (0xc002623ef0) (0xc001756320) Stream added, broadcasting: 5 I0408 00:16:56.160072 7 log.go:172] (0xc002623ef0) Reply frame received for 5 I0408 00:16:56.215814 7 log.go:172] (0xc002623ef0) Data frame received for 5 I0408 00:16:56.215840 7 log.go:172] (0xc001756320) (5) Data frame handling I0408 00:16:56.215874 7 log.go:172] (0xc002623ef0) Data frame received for 1 I0408 00:16:56.215910 7 log.go:172] (0xc0011bc140) (1) Data frame handling I0408 00:16:56.215928 7 log.go:172] (0xc0011bc140) (1) Data frame sent I0408 00:16:56.215945 7 log.go:172] (0xc002623ef0) (0xc0011bc140) Stream removed, broadcasting: 1 I0408 00:16:56.216000 7 log.go:172] (0xc002623ef0) Data frame received for 3 I0408 00:16:56.216041 7 log.go:172] (0xc000185900) (3) Data frame handling I0408 00:16:56.216056 7 log.go:172] (0xc000185900) (3) Data frame sent I0408 00:16:56.216069 7 log.go:172] (0xc002623ef0) Data frame received for 3 I0408 00:16:56.216077 7 log.go:172] (0xc000185900) (3) Data frame handling I0408 00:16:56.216092 7 log.go:172] (0xc002623ef0) Go away received I0408 00:16:56.216174 7 log.go:172] (0xc002623ef0) (0xc0011bc140) Stream removed, broadcasting: 1 I0408 00:16:56.216198 7 log.go:172] (0xc002623ef0) (0xc000185900) Stream removed, broadcasting: 3 I0408 00:16:56.216210 7 log.go:172] (0xc002623ef0) (0xc001756320) Stream removed, broadcasting: 5 Apr 8 00:16:56.216: INFO: Exec stderr: "" Apr 8 00:16:56.216: 
INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7322 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:16:56.216: INFO: >>> kubeConfig: /root/.kube/config I0408 00:16:56.244943 7 log.go:172] (0xc002dcc000) (0xc0029b3c20) Create stream I0408 00:16:56.244987 7 log.go:172] (0xc002dcc000) (0xc0029b3c20) Stream added, broadcasting: 1 I0408 00:16:56.256079 7 log.go:172] (0xc002dcc000) Reply frame received for 1 I0408 00:16:56.256121 7 log.go:172] (0xc002dcc000) (0xc001275040) Create stream I0408 00:16:56.256132 7 log.go:172] (0xc002dcc000) (0xc001275040) Stream added, broadcasting: 3 I0408 00:16:56.256900 7 log.go:172] (0xc002dcc000) Reply frame received for 3 I0408 00:16:56.256930 7 log.go:172] (0xc002dcc000) (0xc0029b3d60) Create stream I0408 00:16:56.256940 7 log.go:172] (0xc002dcc000) (0xc0029b3d60) Stream added, broadcasting: 5 I0408 00:16:56.257882 7 log.go:172] (0xc002dcc000) Reply frame received for 5 I0408 00:16:56.312456 7 log.go:172] (0xc002dcc000) Data frame received for 5 I0408 00:16:56.312490 7 log.go:172] (0xc0029b3d60) (5) Data frame handling I0408 00:16:56.312533 7 log.go:172] (0xc002dcc000) Data frame received for 3 I0408 00:16:56.312555 7 log.go:172] (0xc001275040) (3) Data frame handling I0408 00:16:56.312569 7 log.go:172] (0xc001275040) (3) Data frame sent I0408 00:16:56.312618 7 log.go:172] (0xc002dcc000) Data frame received for 3 I0408 00:16:56.312656 7 log.go:172] (0xc001275040) (3) Data frame handling I0408 00:16:56.314190 7 log.go:172] (0xc002dcc000) Data frame received for 1 I0408 00:16:56.314225 7 log.go:172] (0xc0029b3c20) (1) Data frame handling I0408 00:16:56.314271 7 log.go:172] (0xc0029b3c20) (1) Data frame sent I0408 00:16:56.314310 7 log.go:172] (0xc002dcc000) (0xc0029b3c20) Stream removed, broadcasting: 1 I0408 00:16:56.314338 7 log.go:172] (0xc002dcc000) Go away received I0408 00:16:56.314452 7 log.go:172] 
(0xc002dcc000) (0xc0029b3c20) Stream removed, broadcasting: 1 I0408 00:16:56.314479 7 log.go:172] (0xc002dcc000) (0xc001275040) Stream removed, broadcasting: 3 I0408 00:16:56.314491 7 log.go:172] (0xc002dcc000) (0xc0029b3d60) Stream removed, broadcasting: 5 Apr 8 00:16:56.314: INFO: Exec stderr: "" Apr 8 00:16:56.314: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7322 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:16:56.314: INFO: >>> kubeConfig: /root/.kube/config I0408 00:16:56.347910 7 log.go:172] (0xc002d91d90) (0xc001275540) Create stream I0408 00:16:56.347942 7 log.go:172] (0xc002d91d90) (0xc001275540) Stream added, broadcasting: 1 I0408 00:16:56.350622 7 log.go:172] (0xc002d91d90) Reply frame received for 1 I0408 00:16:56.350662 7 log.go:172] (0xc002d91d90) (0xc0012755e0) Create stream I0408 00:16:56.350676 7 log.go:172] (0xc002d91d90) (0xc0012755e0) Stream added, broadcasting: 3 I0408 00:16:56.352035 7 log.go:172] (0xc002d91d90) Reply frame received for 3 I0408 00:16:56.352093 7 log.go:172] (0xc002d91d90) (0xc0011bc280) Create stream I0408 00:16:56.352109 7 log.go:172] (0xc002d91d90) (0xc0011bc280) Stream added, broadcasting: 5 I0408 00:16:56.353004 7 log.go:172] (0xc002d91d90) Reply frame received for 5 I0408 00:16:56.421528 7 log.go:172] (0xc002d91d90) Data frame received for 3 I0408 00:16:56.421562 7 log.go:172] (0xc0012755e0) (3) Data frame handling I0408 00:16:56.421588 7 log.go:172] (0xc0012755e0) (3) Data frame sent I0408 00:16:56.421759 7 log.go:172] (0xc002d91d90) Data frame received for 5 I0408 00:16:56.421791 7 log.go:172] (0xc0011bc280) (5) Data frame handling I0408 00:16:56.421811 7 log.go:172] (0xc002d91d90) Data frame received for 3 I0408 00:16:56.421819 7 log.go:172] (0xc0012755e0) (3) Data frame handling I0408 00:16:56.423182 7 log.go:172] (0xc002d91d90) Data frame received for 1 I0408 00:16:56.423194 7 log.go:172] 
(0xc001275540) (1) Data frame handling I0408 00:16:56.423212 7 log.go:172] (0xc001275540) (1) Data frame sent I0408 00:16:56.423235 7 log.go:172] (0xc002d91d90) (0xc001275540) Stream removed, broadcasting: 1 I0408 00:16:56.423365 7 log.go:172] (0xc002d91d90) Go away received I0408 00:16:56.423402 7 log.go:172] (0xc002d91d90) (0xc001275540) Stream removed, broadcasting: 1 I0408 00:16:56.423463 7 log.go:172] (0xc002d91d90) (0xc0012755e0) Stream removed, broadcasting: 3 I0408 00:16:56.423492 7 log.go:172] (0xc002d91d90) (0xc0011bc280) Stream removed, broadcasting: 5 Apr 8 00:16:56.423: INFO: Exec stderr: "" Apr 8 00:16:56.423: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7322 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:16:56.423: INFO: >>> kubeConfig: /root/.kube/config I0408 00:16:56.460005 7 log.go:172] (0xc006178a50) (0xc001165220) Create stream I0408 00:16:56.460031 7 log.go:172] (0xc006178a50) (0xc001165220) Stream added, broadcasting: 1 I0408 00:16:56.463506 7 log.go:172] (0xc006178a50) Reply frame received for 1 I0408 00:16:56.463559 7 log.go:172] (0xc006178a50) (0xc000306500) Create stream I0408 00:16:56.463575 7 log.go:172] (0xc006178a50) (0xc000306500) Stream added, broadcasting: 3 I0408 00:16:56.464633 7 log.go:172] (0xc006178a50) Reply frame received for 3 I0408 00:16:56.464679 7 log.go:172] (0xc006178a50) (0xc001756780) Create stream I0408 00:16:56.464695 7 log.go:172] (0xc006178a50) (0xc001756780) Stream added, broadcasting: 5 I0408 00:16:56.465987 7 log.go:172] (0xc006178a50) Reply frame received for 5 I0408 00:16:56.526450 7 log.go:172] (0xc006178a50) Data frame received for 5 I0408 00:16:56.526469 7 log.go:172] (0xc001756780) (5) Data frame handling I0408 00:16:56.526511 7 log.go:172] (0xc006178a50) Data frame received for 3 I0408 00:16:56.526544 7 log.go:172] (0xc000306500) (3) Data frame handling I0408 00:16:56.526569 
7 log.go:172] (0xc000306500) (3) Data frame sent I0408 00:16:56.526589 7 log.go:172] (0xc006178a50) Data frame received for 3 I0408 00:16:56.526605 7 log.go:172] (0xc000306500) (3) Data frame handling I0408 00:16:56.528116 7 log.go:172] (0xc006178a50) Data frame received for 1 I0408 00:16:56.528133 7 log.go:172] (0xc001165220) (1) Data frame handling I0408 00:16:56.528145 7 log.go:172] (0xc001165220) (1) Data frame sent I0408 00:16:56.528281 7 log.go:172] (0xc006178a50) (0xc001165220) Stream removed, broadcasting: 1 I0408 00:16:56.528388 7 log.go:172] (0xc006178a50) (0xc001165220) Stream removed, broadcasting: 1 I0408 00:16:56.528422 7 log.go:172] (0xc006178a50) (0xc000306500) Stream removed, broadcasting: 3 I0408 00:16:56.528563 7 log.go:172] (0xc006178a50) Go away received I0408 00:16:56.528699 7 log.go:172] (0xc006178a50) (0xc001756780) Stream removed, broadcasting: 5 Apr 8 00:16:56.528: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:16:56.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7322" for this suite. 
• [SLOW TEST:11.189 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2478,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:16:56.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Apr 8 00:16:56.596: INFO: Waiting up to 5m0s for pod "client-containers-8c10722d-68f1-412d-881b-353bc0a0b054" in namespace "containers-7699" to be "Succeeded or Failed" Apr 8 00:16:56.606: INFO: Pod "client-containers-8c10722d-68f1-412d-881b-353bc0a0b054": Phase="Pending", Reason="", readiness=false. Elapsed: 9.869465ms Apr 8 00:16:58.609: INFO: Pod "client-containers-8c10722d-68f1-412d-881b-353bc0a0b054": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013124209s Apr 8 00:17:00.613: INFO: Pod "client-containers-8c10722d-68f1-412d-881b-353bc0a0b054": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017162411s STEP: Saw pod success Apr 8 00:17:00.613: INFO: Pod "client-containers-8c10722d-68f1-412d-881b-353bc0a0b054" satisfied condition "Succeeded or Failed" Apr 8 00:17:00.616: INFO: Trying to get logs from node latest-worker2 pod client-containers-8c10722d-68f1-412d-881b-353bc0a0b054 container test-container: STEP: delete the pod Apr 8 00:17:00.650: INFO: Waiting for pod client-containers-8c10722d-68f1-412d-881b-353bc0a0b054 to disappear Apr 8 00:17:00.654: INFO: Pod client-containers-8c10722d-68f1-412d-881b-353bc0a0b054 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:17:00.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7699" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2481,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:17:00.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 8 00:17:00.765: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5183 /api/v1/namespaces/watch-5183/configmaps/e2e-watch-test-label-changed 203e507a-8eaa-43ad-b344-6d7be5445a03 6277217 0 2020-04-08 00:17:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 8 00:17:00.765: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5183 /api/v1/namespaces/watch-5183/configmaps/e2e-watch-test-label-changed 203e507a-8eaa-43ad-b344-6d7be5445a03 6277218 0 2020-04-08 00:17:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 8 00:17:00.765: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5183 /api/v1/namespaces/watch-5183/configmaps/e2e-watch-test-label-changed 203e507a-8eaa-43ad-b344-6d7be5445a03 6277220 0 2020-04-08 00:17:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 8 00:17:10.838: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5183 /api/v1/namespaces/watch-5183/configmaps/e2e-watch-test-label-changed 203e507a-8eaa-43ad-b344-6d7be5445a03 6277282 0 2020-04-08 00:17:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 8 00:17:10.838: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5183 /api/v1/namespaces/watch-5183/configmaps/e2e-watch-test-label-changed 203e507a-8eaa-43ad-b344-6d7be5445a03 6277283 0 2020-04-08 00:17:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 8 00:17:10.838: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5183 /api/v1/namespaces/watch-5183/configmaps/e2e-watch-test-label-changed 203e507a-8eaa-43ad-b344-6d7be5445a03 6277284 0 2020-04-08 00:17:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:17:10.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5183" for this suite. 
• [SLOW TEST:10.179 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":139,"skipped":2530,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:17:10.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:17:10.971: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:17:15.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9481" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2540,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:17:15.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 00:17:15.584: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 00:17:17.592: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901835, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901835, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63721901835, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721901835, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 00:17:20.607: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:17:20.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4740" for this suite. STEP: Destroying namespace "webhook-4740-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.792 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":141,"skipped":2549,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:17:20.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-0443a2ba-22e2-442f-aa71-388c11e7dc2c STEP: Creating configMap with name cm-test-opt-upd-282d1761-b9aa-46be-b5b2-c5a71b1a79d0 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-0443a2ba-22e2-442f-aa71-388c11e7dc2c STEP: Updating configmap cm-test-opt-upd-282d1761-b9aa-46be-b5b2-c5a71b1a79d0 STEP: Creating configMap with name 
cm-test-opt-create-05953688-7f82-445f-ab8a-fae5c4fc58c1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:18:41.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2904" for this suite. • [SLOW TEST:80.582 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2558,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:18:41.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:18:41.502: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 8 00:18:44.441: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-167 create -f -' Apr 8 00:18:47.681: INFO: stderr: "" Apr 8 00:18:47.681: INFO: stdout: "e2e-test-crd-publish-openapi-1304-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 8 00:18:47.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-167 delete e2e-test-crd-publish-openapi-1304-crds test-cr' Apr 8 00:18:47.810: INFO: stderr: "" Apr 8 00:18:47.810: INFO: stdout: "e2e-test-crd-publish-openapi-1304-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 8 00:18:47.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-167 apply -f -' Apr 8 00:18:48.153: INFO: stderr: "" Apr 8 00:18:48.153: INFO: stdout: "e2e-test-crd-publish-openapi-1304-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 8 00:18:48.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-167 delete e2e-test-crd-publish-openapi-1304-crds test-cr' Apr 8 00:18:48.266: INFO: stderr: "" Apr 8 00:18:48.266: INFO: stdout: "e2e-test-crd-publish-openapi-1304-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 8 00:18:48.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1304-crds' Apr 8 00:18:48.618: INFO: stderr: "" Apr 8 00:18:48.618: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1304-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:18:51.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-167" for this suite. • [SLOW TEST:10.083 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":143,"skipped":2576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:18:51.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:18:55.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4766" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2611,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:18:55.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 8 00:18:58.784: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:18:58.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-runtime-7438" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2619,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:18:58.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1108.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1108.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1108.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1108.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1108.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1108.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 00:19:05.034: INFO: DNS probes using dns-1108/dns-test-bdf14ef0-dd0c-4099-a854-0c034614aeb1 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:19:05.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1108" for this suite. 
• [SLOW TEST:6.199 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":146,"skipped":2641,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:19:05.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1239 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1239 I0408 00:19:05.479132 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1239, replica count: 2 I0408 00:19:08.529602 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0408 00:19:11.529856 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 8 00:19:11.529: INFO: Creating new exec pod Apr 8 00:19:16.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1239 execpoddh42q -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 8 00:19:16.795: INFO: stderr: "I0408 00:19:16.701933 1478 log.go:172] (0xc000994000) (0xc0008212c0) Create stream\nI0408 00:19:16.702012 1478 log.go:172] (0xc000994000) (0xc0008212c0) Stream added, broadcasting: 1\nI0408 00:19:16.707403 1478 log.go:172] (0xc000994000) Reply frame received for 1\nI0408 00:19:16.707457 1478 log.go:172] (0xc000994000) (0xc000b88000) Create stream\nI0408 00:19:16.707489 1478 log.go:172] (0xc000994000) (0xc000b88000) Stream added, broadcasting: 3\nI0408 00:19:16.709035 1478 log.go:172] (0xc000994000) Reply frame received for 3\nI0408 00:19:16.709095 1478 log.go:172] (0xc000994000) (0xc0008214a0) Create stream\nI0408 00:19:16.709250 1478 log.go:172] (0xc000994000) (0xc0008214a0) Stream added, broadcasting: 5\nI0408 00:19:16.710897 1478 log.go:172] (0xc000994000) Reply frame received for 5\nI0408 00:19:16.788060 1478 log.go:172] (0xc000994000) Data frame received for 5\nI0408 00:19:16.788094 1478 log.go:172] (0xc0008214a0) (5) Data frame handling\nI0408 00:19:16.788114 1478 log.go:172] (0xc0008214a0) (5) Data frame sent\nI0408 00:19:16.788126 1478 log.go:172] (0xc000994000) Data frame received for 5\nI0408 00:19:16.788136 1478 log.go:172] (0xc0008214a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0408 00:19:16.788228 1478 log.go:172] (0xc000994000) Data frame received for 3\nI0408 00:19:16.788259 1478 log.go:172] (0xc000b88000) (3) Data frame handling\nI0408 00:19:16.790241 1478 
log.go:172] (0xc000994000) Data frame received for 1\nI0408 00:19:16.790272 1478 log.go:172] (0xc0008212c0) (1) Data frame handling\nI0408 00:19:16.790290 1478 log.go:172] (0xc0008212c0) (1) Data frame sent\nI0408 00:19:16.790400 1478 log.go:172] (0xc000994000) (0xc0008212c0) Stream removed, broadcasting: 1\nI0408 00:19:16.790664 1478 log.go:172] (0xc000994000) Go away received\nI0408 00:19:16.790851 1478 log.go:172] (0xc000994000) (0xc0008212c0) Stream removed, broadcasting: 1\nI0408 00:19:16.790879 1478 log.go:172] (0xc000994000) (0xc000b88000) Stream removed, broadcasting: 3\nI0408 00:19:16.790896 1478 log.go:172] (0xc000994000) (0xc0008214a0) Stream removed, broadcasting: 5\n" Apr 8 00:19:16.795: INFO: stdout: "" Apr 8 00:19:16.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1239 execpoddh42q -- /bin/sh -x -c nc -zv -t -w 2 10.96.26.145 80' Apr 8 00:19:16.999: INFO: stderr: "I0408 00:19:16.930494 1498 log.go:172] (0xc000b19340) (0xc0009a86e0) Create stream\nI0408 00:19:16.930556 1498 log.go:172] (0xc000b19340) (0xc0009a86e0) Stream added, broadcasting: 1\nI0408 00:19:16.935461 1498 log.go:172] (0xc000b19340) Reply frame received for 1\nI0408 00:19:16.935495 1498 log.go:172] (0xc000b19340) (0xc00080d680) Create stream\nI0408 00:19:16.935503 1498 log.go:172] (0xc000b19340) (0xc00080d680) Stream added, broadcasting: 3\nI0408 00:19:16.936535 1498 log.go:172] (0xc000b19340) Reply frame received for 3\nI0408 00:19:16.936566 1498 log.go:172] (0xc000b19340) (0xc00055eaa0) Create stream\nI0408 00:19:16.936576 1498 log.go:172] (0xc000b19340) (0xc00055eaa0) Stream added, broadcasting: 5\nI0408 00:19:16.937570 1498 log.go:172] (0xc000b19340) Reply frame received for 5\nI0408 00:19:16.992430 1498 log.go:172] (0xc000b19340) Data frame received for 3\nI0408 00:19:16.992497 1498 log.go:172] (0xc00080d680) (3) Data frame handling\nI0408 00:19:16.992536 1498 log.go:172] (0xc000b19340) Data 
frame received for 5\nI0408 00:19:16.992558 1498 log.go:172] (0xc00055eaa0) (5) Data frame handling\nI0408 00:19:16.992587 1498 log.go:172] (0xc00055eaa0) (5) Data frame sent\nI0408 00:19:16.992607 1498 log.go:172] (0xc000b19340) Data frame received for 5\nI0408 00:19:16.992624 1498 log.go:172] (0xc00055eaa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.26.145 80\nConnection to 10.96.26.145 80 port [tcp/http] succeeded!\nI0408 00:19:16.994247 1498 log.go:172] (0xc000b19340) Data frame received for 1\nI0408 00:19:16.994292 1498 log.go:172] (0xc0009a86e0) (1) Data frame handling\nI0408 00:19:16.994307 1498 log.go:172] (0xc0009a86e0) (1) Data frame sent\nI0408 00:19:16.994320 1498 log.go:172] (0xc000b19340) (0xc0009a86e0) Stream removed, broadcasting: 1\nI0408 00:19:16.994359 1498 log.go:172] (0xc000b19340) Go away received\nI0408 00:19:16.994828 1498 log.go:172] (0xc000b19340) (0xc0009a86e0) Stream removed, broadcasting: 1\nI0408 00:19:16.994849 1498 log.go:172] (0xc000b19340) (0xc00080d680) Stream removed, broadcasting: 3\nI0408 00:19:16.994860 1498 log.go:172] (0xc000b19340) (0xc00055eaa0) Stream removed, broadcasting: 5\n" Apr 8 00:19:16.999: INFO: stdout: "" Apr 8 00:19:16.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1239 execpoddh42q -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30166' Apr 8 00:19:17.196: INFO: stderr: "I0408 00:19:17.119865 1518 log.go:172] (0xc000970dc0) (0xc000954460) Create stream\nI0408 00:19:17.119919 1518 log.go:172] (0xc000970dc0) (0xc000954460) Stream added, broadcasting: 1\nI0408 00:19:17.124695 1518 log.go:172] (0xc000970dc0) Reply frame received for 1\nI0408 00:19:17.124749 1518 log.go:172] (0xc000970dc0) (0xc0006cd680) Create stream\nI0408 00:19:17.124761 1518 log.go:172] (0xc000970dc0) (0xc0006cd680) Stream added, broadcasting: 3\nI0408 00:19:17.126064 1518 log.go:172] (0xc000970dc0) Reply frame received for 3\nI0408 00:19:17.126111 1518 
log.go:172] (0xc000970dc0) (0xc000574aa0) Create stream\nI0408 00:19:17.126131 1518 log.go:172] (0xc000970dc0) (0xc000574aa0) Stream added, broadcasting: 5\nI0408 00:19:17.127101 1518 log.go:172] (0xc000970dc0) Reply frame received for 5\nI0408 00:19:17.190208 1518 log.go:172] (0xc000970dc0) Data frame received for 3\nI0408 00:19:17.190259 1518 log.go:172] (0xc0006cd680) (3) Data frame handling\nI0408 00:19:17.190301 1518 log.go:172] (0xc000970dc0) Data frame received for 5\nI0408 00:19:17.190318 1518 log.go:172] (0xc000574aa0) (5) Data frame handling\nI0408 00:19:17.190332 1518 log.go:172] (0xc000574aa0) (5) Data frame sent\nI0408 00:19:17.190364 1518 log.go:172] (0xc000970dc0) Data frame received for 5\nI0408 00:19:17.190373 1518 log.go:172] (0xc000574aa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30166\nConnection to 172.17.0.13 30166 port [tcp/30166] succeeded!\nI0408 00:19:17.192001 1518 log.go:172] (0xc000970dc0) Data frame received for 1\nI0408 00:19:17.192019 1518 log.go:172] (0xc000954460) (1) Data frame handling\nI0408 00:19:17.192036 1518 log.go:172] (0xc000954460) (1) Data frame sent\nI0408 00:19:17.192053 1518 log.go:172] (0xc000970dc0) (0xc000954460) Stream removed, broadcasting: 1\nI0408 00:19:17.192126 1518 log.go:172] (0xc000970dc0) Go away received\nI0408 00:19:17.192298 1518 log.go:172] (0xc000970dc0) (0xc000954460) Stream removed, broadcasting: 1\nI0408 00:19:17.192316 1518 log.go:172] (0xc000970dc0) (0xc0006cd680) Stream removed, broadcasting: 3\nI0408 00:19:17.192326 1518 log.go:172] (0xc000970dc0) (0xc000574aa0) Stream removed, broadcasting: 5\n" Apr 8 00:19:17.196: INFO: stdout: "" Apr 8 00:19:17.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1239 execpoddh42q -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30166' Apr 8 00:19:17.393: INFO: stderr: "I0408 00:19:17.316852 1538 log.go:172] (0xc000ab4f20) (0xc000a80320) Create stream\nI0408 
00:19:17.316904 1538 log.go:172] (0xc000ab4f20) (0xc000a80320) Stream added, broadcasting: 1\nI0408 00:19:17.321316 1538 log.go:172] (0xc000ab4f20) Reply frame received for 1\nI0408 00:19:17.321353 1538 log.go:172] (0xc000ab4f20) (0xc0005eb680) Create stream\nI0408 00:19:17.321364 1538 log.go:172] (0xc000ab4f20) (0xc0005eb680) Stream added, broadcasting: 3\nI0408 00:19:17.322118 1538 log.go:172] (0xc000ab4f20) Reply frame received for 3\nI0408 00:19:17.322145 1538 log.go:172] (0xc000ab4f20) (0xc00048aaa0) Create stream\nI0408 00:19:17.322152 1538 log.go:172] (0xc000ab4f20) (0xc00048aaa0) Stream added, broadcasting: 5\nI0408 00:19:17.322880 1538 log.go:172] (0xc000ab4f20) Reply frame received for 5\nI0408 00:19:17.386849 1538 log.go:172] (0xc000ab4f20) Data frame received for 3\nI0408 00:19:17.386877 1538 log.go:172] (0xc0005eb680) (3) Data frame handling\nI0408 00:19:17.386901 1538 log.go:172] (0xc000ab4f20) Data frame received for 5\nI0408 00:19:17.386910 1538 log.go:172] (0xc00048aaa0) (5) Data frame handling\nI0408 00:19:17.386920 1538 log.go:172] (0xc00048aaa0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30166\nConnection to 172.17.0.12 30166 port [tcp/30166] succeeded!\nI0408 00:19:17.387269 1538 log.go:172] (0xc000ab4f20) Data frame received for 5\nI0408 00:19:17.387286 1538 log.go:172] (0xc00048aaa0) (5) Data frame handling\nI0408 00:19:17.389059 1538 log.go:172] (0xc000ab4f20) Data frame received for 1\nI0408 00:19:17.389098 1538 log.go:172] (0xc000a80320) (1) Data frame handling\nI0408 00:19:17.389238 1538 log.go:172] (0xc000a80320) (1) Data frame sent\nI0408 00:19:17.389258 1538 log.go:172] (0xc000ab4f20) (0xc000a80320) Stream removed, broadcasting: 1\nI0408 00:19:17.389316 1538 log.go:172] (0xc000ab4f20) Go away received\nI0408 00:19:17.389616 1538 log.go:172] (0xc000ab4f20) (0xc000a80320) Stream removed, broadcasting: 1\nI0408 00:19:17.389643 1538 log.go:172] (0xc000ab4f20) (0xc0005eb680) Stream removed, broadcasting: 3\nI0408 00:19:17.389656 1538 
log.go:172] (0xc000ab4f20) (0xc00048aaa0) Stream removed, broadcasting: 5\n" Apr 8 00:19:17.394: INFO: stdout: "" Apr 8 00:19:17.394: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:19:17.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1239" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.347 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":147,"skipped":2660,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:19:17.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:19:17.484: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:19:21.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-120" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2702,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:19:21.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: 
fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:19:21.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1606" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":149,"skipped":2715,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:19:21.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-41ea4186-bf91-4b15-91fe-7059e6112abe in namespace container-probe-513 Apr 8 00:19:25.866: INFO: Started pod liveness-41ea4186-bf91-4b15-91fe-7059e6112abe in namespace container-probe-513 STEP: checking the pod's current state and verifying that 
restartCount is present Apr 8 00:19:25.868: INFO: Initial restart count of pod liveness-41ea4186-bf91-4b15-91fe-7059e6112abe is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:23:26.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-513" for this suite. • [SLOW TEST:244.976 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:23:26.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods 
STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0408 00:23:27.647813 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 8 00:23:27.647: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:23:27.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4564" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":151,"skipped":2749,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:23:27.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:23:27.715: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:23:34.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-773" for this suite. 
• [SLOW TEST:6.863 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":152,"skipped":2763,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:23:34.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:23:34.566: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 8 00:23:38.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-6057 create -f -' Apr 8 00:23:40.853: INFO: stderr: "" Apr 8 00:23:40.853: INFO: stdout: "e2e-test-crd-publish-openapi-9783-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 8 00:23:40.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6057 delete e2e-test-crd-publish-openapi-9783-crds test-cr' Apr 8 00:23:40.986: INFO: stderr: "" Apr 8 00:23:40.986: INFO: stdout: "e2e-test-crd-publish-openapi-9783-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 8 00:23:40.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6057 apply -f -' Apr 8 00:23:41.223: INFO: stderr: "" Apr 8 00:23:41.223: INFO: stdout: "e2e-test-crd-publish-openapi-9783-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 8 00:23:41.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6057 delete e2e-test-crd-publish-openapi-9783-crds test-cr' Apr 8 00:23:41.338: INFO: stderr: "" Apr 8 00:23:41.338: INFO: stdout: "e2e-test-crd-publish-openapi-9783-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 8 00:23:41.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9783-crds' Apr 8 00:23:41.574: INFO: stderr: "" Apr 8 00:23:41.574: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9783-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:23:44.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6057" for this suite. • [SLOW TEST:9.988 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":153,"skipped":2778,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:23:44.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 8 00:23:44.580: INFO: Waiting up to 5m0s for pod "pod-730c776b-7efe-4525-92b6-e3871536e527" in namespace "emptydir-1398" to be "Succeeded or Failed" Apr 8 00:23:44.591: INFO: Pod "pod-730c776b-7efe-4525-92b6-e3871536e527": Phase="Pending", Reason="", 
readiness=false. Elapsed: 10.361427ms Apr 8 00:23:46.609: INFO: Pod "pod-730c776b-7efe-4525-92b6-e3871536e527": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028738662s Apr 8 00:23:48.613: INFO: Pod "pod-730c776b-7efe-4525-92b6-e3871536e527": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03309692s STEP: Saw pod success Apr 8 00:23:48.613: INFO: Pod "pod-730c776b-7efe-4525-92b6-e3871536e527" satisfied condition "Succeeded or Failed" Apr 8 00:23:48.617: INFO: Trying to get logs from node latest-worker2 pod pod-730c776b-7efe-4525-92b6-e3871536e527 container test-container: STEP: delete the pod Apr 8 00:23:48.646: INFO: Waiting for pod pod-730c776b-7efe-4525-92b6-e3871536e527 to disappear Apr 8 00:23:48.651: INFO: Pod pod-730c776b-7efe-4525-92b6-e3871536e527 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:23:48.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1398" for this suite. 
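The (non-root,0777,tmpfs) case above boils down to writing a directory with mode 0777 into the volume and reading the permissions back from inside the test container. A minimal local sketch of that check (this is not the actual e2e pod spec, just the permission round-trip it verifies):

```shell
# Sketch of the emptydir mode check: create a directory, apply 0777,
# and read the mode back the way the test container's assertion would.
dir=$(mktemp -d)
chmod 0777 "$dir"
# GNU stat first; BSD stat fallback for portability
mode=$(stat -c '%a' "$dir" 2>/dev/null || stat -f '%Lp' "$dir")
echo "volume dir mode: $mode"   # expect 777
rmdir "$dir"
```

The real test additionally runs as a non-root UID and mounts the directory as a tmpfs-backed emptyDir, which this local sketch does not reproduce.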
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2802,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:23:48.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 8 00:23:48.737: INFO: Waiting up to 5m0s for pod "pod-e685e443-5436-4958-8d70-66332f54ae62" in namespace "emptydir-6625" to be "Succeeded or Failed" Apr 8 00:23:48.741: INFO: Pod "pod-e685e443-5436-4958-8d70-66332f54ae62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.003662ms Apr 8 00:23:50.744: INFO: Pod "pod-e685e443-5436-4958-8d70-66332f54ae62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007410673s Apr 8 00:23:52.748: INFO: Pod "pod-e685e443-5436-4958-8d70-66332f54ae62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011150148s STEP: Saw pod success Apr 8 00:23:52.748: INFO: Pod "pod-e685e443-5436-4958-8d70-66332f54ae62" satisfied condition "Succeeded or Failed" Apr 8 00:23:52.750: INFO: Trying to get logs from node latest-worker pod pod-e685e443-5436-4958-8d70-66332f54ae62 container test-container: STEP: delete the pod Apr 8 00:23:52.815: INFO: Waiting for pod pod-e685e443-5436-4958-8d70-66332f54ae62 to disappear Apr 8 00:23:52.818: INFO: Pod pod-e685e443-5436-4958-8d70-66332f54ae62 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:23:52.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6625" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2806,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:23:52.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 8 00:23:52.905: INFO: Waiting up to 5m0s for pod "pod-e45fb480-9edd-4d60-9384-2942411c7950" 
in namespace "emptydir-5485" to be "Succeeded or Failed" Apr 8 00:23:52.908: INFO: Pod "pod-e45fb480-9edd-4d60-9384-2942411c7950": Phase="Pending", Reason="", readiness=false. Elapsed: 3.053418ms Apr 8 00:23:54.912: INFO: Pod "pod-e45fb480-9edd-4d60-9384-2942411c7950": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006388532s Apr 8 00:23:56.916: INFO: Pod "pod-e45fb480-9edd-4d60-9384-2942411c7950": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010473525s STEP: Saw pod success Apr 8 00:23:56.916: INFO: Pod "pod-e45fb480-9edd-4d60-9384-2942411c7950" satisfied condition "Succeeded or Failed" Apr 8 00:23:56.919: INFO: Trying to get logs from node latest-worker2 pod pod-e45fb480-9edd-4d60-9384-2942411c7950 container test-container: STEP: delete the pod Apr 8 00:23:56.994: INFO: Waiting for pod pod-e45fb480-9edd-4d60-9384-2942411c7950 to disappear Apr 8 00:23:56.999: INFO: Pod pod-e45fb480-9edd-4d60-9384-2942411c7950 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:23:56.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5485" for this suite. 
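Every "Waiting up to 5m0s for pod … to be \"Succeeded or Failed\"" sequence above is the same poll-until-deadline pattern: re-check the pod phase, log the elapsed time, and stop on a terminal phase or timeout. A self-contained sketch of that loop (the phase flip is simulated; the real framework reads it from the API, e.g. via a `kubectl get pod -o jsonpath` equivalent):

```shell
# Poll-until-deadline pattern from the log, with a simulated phase source.
deadline=$((SECONDS + 10))   # the real test waits up to 5m0s
attempt=0
phase=Pending
while [ "$SECONDS" -lt "$deadline" ]; do
  attempt=$((attempt + 1))
  # stand-in for the API read; flips to Succeeded on the 3rd poll
  [ "$attempt" -ge 3 ] && phase=Succeeded
  echo "Pod phase=$phase (attempt $attempt, elapsed ${SECONDS}s)"
  [ "$phase" = "Succeeded" ] && break
  sleep 1
done
echo "final phase: $phase"
```

The log's millisecond-precision "Elapsed:" values come from timestamping each poll the same way.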
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2883,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:23:57.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 00:23:57.988: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 00:24:00.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902237, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902237, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902238, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902237, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 00:24:03.057: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 8 00:24:03.080: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:24:03.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-996" for this suite. STEP: Destroying namespace "webhook-996-markers" for this suite. 
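The DeploymentStatus dump above shows why the webhook setup waits: the "Available" condition is still False with reason MinimumReplicasUnavailable. The readiness gate is simply "proceed once Available=True". A sketch of that gate, evaluated against a condition string like the one in the log (the real test reads conditions from the API, not a string):

```shell
# Simulated check of the Deployment "Available" condition from the log.
conditions='Available=False Progressing=True'
case "$conditions" in
  *"Available=True"*) echo "webhook deployment ready" ;;
  *) echo "not ready: Deployment does not have minimum availability." ;;
esac
```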
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.203 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":157,"skipped":2884,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:24:03.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1552.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1552.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1552.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1552.svc.cluster.local SRV)" && test 
-n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1552.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1552.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1552.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1552.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1552.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1552.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 83.217.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.217.83_udp@PTR;check="$$(dig +tcp +noall +answer +search 83.217.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.217.83_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1552.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1552.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1552.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1552.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1552.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1552.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1552.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1552.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1552.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1552.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1552.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 83.217.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.217.83_udp@PTR;check="$$(dig +tcp +noall +answer +search 83.217.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.217.83_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 00:24:09.353: INFO: Unable to read wheezy_udp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:09.357: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:09.360: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:09.362: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:09.383: INFO: Unable to read jessie_udp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:09.386: INFO: Unable to read jessie_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:09.389: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod 
dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:09.392: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:09.410: INFO: Lookups using dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577 failed for: [wheezy_udp@dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local jessie_udp@dns-test-service.dns-1552.svc.cluster.local jessie_tcp@dns-test-service.dns-1552.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local] Apr 8 00:24:14.414: INFO: Unable to read wheezy_udp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:14.417: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:14.420: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:14.423: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod 
dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:14.444: INFO: Unable to read jessie_udp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:14.447: INFO: Unable to read jessie_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:14.451: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:14.454: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:14.474: INFO: Lookups using dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577 failed for: [wheezy_udp@dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local jessie_udp@dns-test-service.dns-1552.svc.cluster.local jessie_tcp@dns-test-service.dns-1552.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local] Apr 8 00:24:19.416: INFO: Unable to read wheezy_udp@dns-test-service.dns-1552.svc.cluster.local from pod 
dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:19.419: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:19.422: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:19.426: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:19.446: INFO: Unable to read jessie_udp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:19.449: INFO: Unable to read jessie_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:19.452: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:19.455: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the 
requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:19.472: INFO: Lookups using dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577 failed for: [wheezy_udp@dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local jessie_udp@dns-test-service.dns-1552.svc.cluster.local jessie_tcp@dns-test-service.dns-1552.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local] Apr 8 00:24:24.416: INFO: Unable to read wheezy_udp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:24.419: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:24.423: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:24.431: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:24.477: INFO: Unable to read jessie_udp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods 
dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:24.480: INFO: Unable to read jessie_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:24.481: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:24.483: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:24.496: INFO: Lookups using dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577 failed for: [wheezy_udp@dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local jessie_udp@dns-test-service.dns-1552.svc.cluster.local jessie_tcp@dns-test-service.dns-1552.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local] Apr 8 00:24:29.419: INFO: Unable to read wheezy_udp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:29.422: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) 
Apr 8 00:24:29.426: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:29.428: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:29.448: INFO: Unable to read jessie_udp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:29.472: INFO: Unable to read jessie_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:29.485: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:29.505: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:29.521: INFO: Lookups using dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577 failed for: [wheezy_udp@dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local 
jessie_udp@dns-test-service.dns-1552.svc.cluster.local jessie_tcp@dns-test-service.dns-1552.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local] Apr 8 00:24:34.415: INFO: Unable to read wheezy_udp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:34.419: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:34.422: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:34.426: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:34.447: INFO: Unable to read jessie_udp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:34.450: INFO: Unable to read jessie_tcp@dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:34.453: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod 
dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:34.456: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local from pod dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577: the server could not find the requested resource (get pods dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577) Apr 8 00:24:34.472: INFO: Lookups using dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577 failed for: [wheezy_udp@dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@dns-test-service.dns-1552.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local jessie_udp@dns-test-service.dns-1552.svc.cluster.local jessie_tcp@dns-test-service.dns-1552.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1552.svc.cluster.local] Apr 8 00:24:39.474: INFO: DNS probes using dns-1552/dns-test-e8c7b161-5ef9-47c5-bf06-9e6a82c0e577 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:24:40.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1552" for this suite. 
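The lookups that initially fail above all target the standard Kubernetes DNS names for the test service; once the service records propagate, the probes succeed and the test passes. The naming scheme being exercised can be sketched as follows (a minimal illustration of the documented Service DNS convention, not part of the test framework; the `cluster.local` domain matches this cluster's default):

```python
# Kubernetes Service DNS naming, as probed by the e2e DNS test above.
# A Service gets an A/AAAA name <service>.<namespace>.svc.<cluster-domain>,
# and each named port additionally gets an SRV name
# _<port-name>._<protocol>.<service-fqdn>.

def service_fqdn(service, namespace, cluster_domain="cluster.local"):
    """A-record name for a Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

def srv_fqdn(port_name, proto, service, namespace, cluster_domain="cluster.local"):
    """SRV-record name for a named port on a Service."""
    return f"_{port_name}._{proto}.{service_fqdn(service, namespace, cluster_domain)}"

print(service_fqdn("dns-test-service", "dns-1552"))
# dns-test-service.dns-1552.svc.cluster.local
print(srv_fqdn("http", "tcp", "dns-test-service", "dns-1552"))
# _http._tcp.dns-test-service.dns-1552.svc.cluster.local
```

These are exactly the `wheezy_udp@…` / `jessie_tcp@…` targets in the log: the prefix names the probe image and transport, the suffix is the FQDN built as above.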
• [SLOW TEST:36.884 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":158,"skipped":2886,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:24:40.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4712 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4712 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4712 Apr 8 00:24:40.179: INFO: Found 0 stateful pods, waiting for 1 Apr 8 00:24:50.186: INFO: Waiting for pod ss-0 to enter 
Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 8 00:24:50.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4712 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 00:24:50.478: INFO: stderr: "I0408 00:24:50.336387 1679 log.go:172] (0xc000a42630) (0xc000b600a0) Create stream\nI0408 00:24:50.336462 1679 log.go:172] (0xc000a42630) (0xc000b600a0) Stream added, broadcasting: 1\nI0408 00:24:50.345650 1679 log.go:172] (0xc000a42630) Reply frame received for 1\nI0408 00:24:50.345697 1679 log.go:172] (0xc000a42630) (0xc0006592c0) Create stream\nI0408 00:24:50.345710 1679 log.go:172] (0xc000a42630) (0xc0006592c0) Stream added, broadcasting: 3\nI0408 00:24:50.347325 1679 log.go:172] (0xc000a42630) Reply frame received for 3\nI0408 00:24:50.347386 1679 log.go:172] (0xc000a42630) (0xc000b60140) Create stream\nI0408 00:24:50.347419 1679 log.go:172] (0xc000a42630) (0xc000b60140) Stream added, broadcasting: 5\nI0408 00:24:50.349755 1679 log.go:172] (0xc000a42630) Reply frame received for 5\nI0408 00:24:50.443990 1679 log.go:172] (0xc000a42630) Data frame received for 5\nI0408 00:24:50.444026 1679 log.go:172] (0xc000b60140) (5) Data frame handling\nI0408 00:24:50.444047 1679 log.go:172] (0xc000b60140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 00:24:50.471522 1679 log.go:172] (0xc000a42630) Data frame received for 5\nI0408 00:24:50.471679 1679 log.go:172] (0xc000b60140) (5) Data frame handling\nI0408 00:24:50.471813 1679 log.go:172] (0xc000a42630) Data frame received for 3\nI0408 00:24:50.471852 1679 log.go:172] (0xc0006592c0) (3) Data frame handling\nI0408 00:24:50.471882 1679 log.go:172] (0xc0006592c0) (3) Data frame sent\nI0408 00:24:50.471905 1679 log.go:172] (0xc000a42630) Data frame received for 3\nI0408 00:24:50.471917 1679 
log.go:172] (0xc0006592c0) (3) Data frame handling\nI0408 00:24:50.473051 1679 log.go:172] (0xc000a42630) Data frame received for 1\nI0408 00:24:50.473067 1679 log.go:172] (0xc000b600a0) (1) Data frame handling\nI0408 00:24:50.473084 1679 log.go:172] (0xc000b600a0) (1) Data frame sent\nI0408 00:24:50.473097 1679 log.go:172] (0xc000a42630) (0xc000b600a0) Stream removed, broadcasting: 1\nI0408 00:24:50.473404 1679 log.go:172] (0xc000a42630) (0xc000b600a0) Stream removed, broadcasting: 1\nI0408 00:24:50.473427 1679 log.go:172] (0xc000a42630) (0xc0006592c0) Stream removed, broadcasting: 3\nI0408 00:24:50.473579 1679 log.go:172] (0xc000a42630) Go away received\nI0408 00:24:50.473644 1679 log.go:172] (0xc000a42630) (0xc000b60140) Stream removed, broadcasting: 5\n" Apr 8 00:24:50.478: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 00:24:50.478: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 00:24:50.484: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 8 00:25:00.489: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 8 00:25:00.489: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 00:25:00.509: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999241s Apr 8 00:25:01.514: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99114326s Apr 8 00:25:02.518: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.986771067s Apr 8 00:25:03.523: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981871124s Apr 8 00:25:04.527: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977603728s Apr 8 00:25:05.532: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.9731317s Apr 8 00:25:06.536: INFO: Verifying statefulset ss doesn't scale past 1 for another 
3.968530596s Apr 8 00:25:07.541: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.964029082s Apr 8 00:25:08.546: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.959386147s Apr 8 00:25:09.550: INFO: Verifying statefulset ss doesn't scale past 1 for another 954.770609ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4712 Apr 8 00:25:10.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4712 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 00:25:10.777: INFO: stderr: "I0408 00:25:10.681814 1700 log.go:172] (0xc0007e40b0) (0xc0007aa280) Create stream\nI0408 00:25:10.681869 1700 log.go:172] (0xc0007e40b0) (0xc0007aa280) Stream added, broadcasting: 1\nI0408 00:25:10.684538 1700 log.go:172] (0xc0007e40b0) Reply frame received for 1\nI0408 00:25:10.684621 1700 log.go:172] (0xc0007e40b0) (0xc00029f2c0) Create stream\nI0408 00:25:10.684654 1700 log.go:172] (0xc0007e40b0) (0xc00029f2c0) Stream added, broadcasting: 3\nI0408 00:25:10.685903 1700 log.go:172] (0xc0007e40b0) Reply frame received for 3\nI0408 00:25:10.685943 1700 log.go:172] (0xc0007e40b0) (0xc0007aa320) Create stream\nI0408 00:25:10.685955 1700 log.go:172] (0xc0007e40b0) (0xc0007aa320) Stream added, broadcasting: 5\nI0408 00:25:10.687127 1700 log.go:172] (0xc0007e40b0) Reply frame received for 5\nI0408 00:25:10.770080 1700 log.go:172] (0xc0007e40b0) Data frame received for 3\nI0408 00:25:10.770136 1700 log.go:172] (0xc00029f2c0) (3) Data frame handling\nI0408 00:25:10.770157 1700 log.go:172] (0xc00029f2c0) (3) Data frame sent\nI0408 00:25:10.770173 1700 log.go:172] (0xc0007e40b0) Data frame received for 3\nI0408 00:25:10.770186 1700 log.go:172] (0xc00029f2c0) (3) Data frame handling\nI0408 00:25:10.770263 1700 log.go:172] (0xc0007e40b0) Data frame received for 5\nI0408 00:25:10.770298 
1700 log.go:172] (0xc0007aa320) (5) Data frame handling\nI0408 00:25:10.770332 1700 log.go:172] (0xc0007aa320) (5) Data frame sent\nI0408 00:25:10.770361 1700 log.go:172] (0xc0007e40b0) Data frame received for 5\nI0408 00:25:10.770383 1700 log.go:172] (0xc0007aa320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 00:25:10.771805 1700 log.go:172] (0xc0007e40b0) Data frame received for 1\nI0408 00:25:10.771835 1700 log.go:172] (0xc0007aa280) (1) Data frame handling\nI0408 00:25:10.771851 1700 log.go:172] (0xc0007aa280) (1) Data frame sent\nI0408 00:25:10.771870 1700 log.go:172] (0xc0007e40b0) (0xc0007aa280) Stream removed, broadcasting: 1\nI0408 00:25:10.771900 1700 log.go:172] (0xc0007e40b0) Go away received\nI0408 00:25:10.772302 1700 log.go:172] (0xc0007e40b0) (0xc0007aa280) Stream removed, broadcasting: 1\nI0408 00:25:10.772330 1700 log.go:172] (0xc0007e40b0) (0xc00029f2c0) Stream removed, broadcasting: 3\nI0408 00:25:10.772343 1700 log.go:172] (0xc0007e40b0) (0xc0007aa320) Stream removed, broadcasting: 5\n" Apr 8 00:25:10.777: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 00:25:10.777: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 00:25:10.780: INFO: Found 1 stateful pods, waiting for 3 Apr 8 00:25:20.785: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 00:25:20.785: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 00:25:20.785: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 8 00:25:20.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4712 ss-0 -- 
/bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 00:25:21.011: INFO: stderr: "I0408 00:25:20.914396 1722 log.go:172] (0xc000b0cfd0) (0xc000a786e0) Create stream\nI0408 00:25:20.914527 1722 log.go:172] (0xc000b0cfd0) (0xc000a786e0) Stream added, broadcasting: 1\nI0408 00:25:20.919046 1722 log.go:172] (0xc000b0cfd0) Reply frame received for 1\nI0408 00:25:20.919100 1722 log.go:172] (0xc000b0cfd0) (0xc00062f5e0) Create stream\nI0408 00:25:20.919125 1722 log.go:172] (0xc000b0cfd0) (0xc00062f5e0) Stream added, broadcasting: 3\nI0408 00:25:20.920199 1722 log.go:172] (0xc000b0cfd0) Reply frame received for 3\nI0408 00:25:20.920246 1722 log.go:172] (0xc000b0cfd0) (0xc0003fea00) Create stream\nI0408 00:25:20.920261 1722 log.go:172] (0xc000b0cfd0) (0xc0003fea00) Stream added, broadcasting: 5\nI0408 00:25:20.921450 1722 log.go:172] (0xc000b0cfd0) Reply frame received for 5\nI0408 00:25:21.004241 1722 log.go:172] (0xc000b0cfd0) Data frame received for 3\nI0408 00:25:21.004275 1722 log.go:172] (0xc00062f5e0) (3) Data frame handling\nI0408 00:25:21.004299 1722 log.go:172] (0xc00062f5e0) (3) Data frame sent\nI0408 00:25:21.004556 1722 log.go:172] (0xc000b0cfd0) Data frame received for 5\nI0408 00:25:21.004601 1722 log.go:172] (0xc000b0cfd0) Data frame received for 3\nI0408 00:25:21.004642 1722 log.go:172] (0xc00062f5e0) (3) Data frame handling\nI0408 00:25:21.004673 1722 log.go:172] (0xc0003fea00) (5) Data frame handling\nI0408 00:25:21.004685 1722 log.go:172] (0xc0003fea00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 00:25:21.004787 1722 log.go:172] (0xc000b0cfd0) Data frame received for 5\nI0408 00:25:21.004803 1722 log.go:172] (0xc0003fea00) (5) Data frame handling\nI0408 00:25:21.006158 1722 log.go:172] (0xc000b0cfd0) Data frame received for 1\nI0408 00:25:21.006179 1722 log.go:172] (0xc000a786e0) (1) Data frame handling\nI0408 00:25:21.006194 1722 log.go:172] (0xc000a786e0) (1) Data frame sent\nI0408 
00:25:21.006209 1722 log.go:172] (0xc000b0cfd0) (0xc000a786e0) Stream removed, broadcasting: 1\nI0408 00:25:21.006224 1722 log.go:172] (0xc000b0cfd0) Go away received\nI0408 00:25:21.006558 1722 log.go:172] (0xc000b0cfd0) (0xc000a786e0) Stream removed, broadcasting: 1\nI0408 00:25:21.006575 1722 log.go:172] (0xc000b0cfd0) (0xc00062f5e0) Stream removed, broadcasting: 3\nI0408 00:25:21.006585 1722 log.go:172] (0xc000b0cfd0) (0xc0003fea00) Stream removed, broadcasting: 5\n" Apr 8 00:25:21.011: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 00:25:21.011: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 00:25:21.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4712 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 00:25:21.269: INFO: stderr: "I0408 00:25:21.160142 1742 log.go:172] (0xc000a471e0) (0xc000ab23c0) Create stream\nI0408 00:25:21.160217 1742 log.go:172] (0xc000a471e0) (0xc000ab23c0) Stream added, broadcasting: 1\nI0408 00:25:21.163811 1742 log.go:172] (0xc000a471e0) Reply frame received for 1\nI0408 00:25:21.163861 1742 log.go:172] (0xc000a471e0) (0xc000aec140) Create stream\nI0408 00:25:21.163874 1742 log.go:172] (0xc000a471e0) (0xc000aec140) Stream added, broadcasting: 3\nI0408 00:25:21.165093 1742 log.go:172] (0xc000a471e0) Reply frame received for 3\nI0408 00:25:21.165277 1742 log.go:172] (0xc000a471e0) (0xc000aa60a0) Create stream\nI0408 00:25:21.165296 1742 log.go:172] (0xc000a471e0) (0xc000aa60a0) Stream added, broadcasting: 5\nI0408 00:25:21.166358 1742 log.go:172] (0xc000a471e0) Reply frame received for 5\nI0408 00:25:21.227646 1742 log.go:172] (0xc000a471e0) Data frame received for 5\nI0408 00:25:21.227710 1742 log.go:172] (0xc000aa60a0) (5) Data frame handling\nI0408 00:25:21.227743 1742 
log.go:172] (0xc000aa60a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 00:25:21.261747 1742 log.go:172] (0xc000a471e0) Data frame received for 3\nI0408 00:25:21.261791 1742 log.go:172] (0xc000aec140) (3) Data frame handling\nI0408 00:25:21.261814 1742 log.go:172] (0xc000aec140) (3) Data frame sent\nI0408 00:25:21.262253 1742 log.go:172] (0xc000a471e0) Data frame received for 5\nI0408 00:25:21.262299 1742 log.go:172] (0xc000aa60a0) (5) Data frame handling\nI0408 00:25:21.262380 1742 log.go:172] (0xc000a471e0) Data frame received for 3\nI0408 00:25:21.262397 1742 log.go:172] (0xc000aec140) (3) Data frame handling\nI0408 00:25:21.264515 1742 log.go:172] (0xc000a471e0) Data frame received for 1\nI0408 00:25:21.264541 1742 log.go:172] (0xc000ab23c0) (1) Data frame handling\nI0408 00:25:21.264566 1742 log.go:172] (0xc000ab23c0) (1) Data frame sent\nI0408 00:25:21.264601 1742 log.go:172] (0xc000a471e0) (0xc000ab23c0) Stream removed, broadcasting: 1\nI0408 00:25:21.264625 1742 log.go:172] (0xc000a471e0) Go away received\nI0408 00:25:21.265028 1742 log.go:172] (0xc000a471e0) (0xc000ab23c0) Stream removed, broadcasting: 1\nI0408 00:25:21.265050 1742 log.go:172] (0xc000a471e0) (0xc000aec140) Stream removed, broadcasting: 3\nI0408 00:25:21.265061 1742 log.go:172] (0xc000a471e0) (0xc000aa60a0) Stream removed, broadcasting: 5\n" Apr 8 00:25:21.269: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 00:25:21.270: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 00:25:21.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4712 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 00:25:21.520: INFO: stderr: "I0408 00:25:21.402866 1762 log.go:172] (0xc000a52000) (0xc000968000) Create stream\nI0408 
00:25:21.402952 1762 log.go:172] (0xc000a52000) (0xc000968000) Stream added, broadcasting: 1\nI0408 00:25:21.405799 1762 log.go:172] (0xc000a52000) Reply frame received for 1\nI0408 00:25:21.405856 1762 log.go:172] (0xc000a52000) (0xc0009680a0) Create stream\nI0408 00:25:21.405883 1762 log.go:172] (0xc000a52000) (0xc0009680a0) Stream added, broadcasting: 3\nI0408 00:25:21.406702 1762 log.go:172] (0xc000a52000) Reply frame received for 3\nI0408 00:25:21.406741 1762 log.go:172] (0xc000a52000) (0xc00060c000) Create stream\nI0408 00:25:21.406749 1762 log.go:172] (0xc000a52000) (0xc00060c000) Stream added, broadcasting: 5\nI0408 00:25:21.407641 1762 log.go:172] (0xc000a52000) Reply frame received for 5\nI0408 00:25:21.472552 1762 log.go:172] (0xc000a52000) Data frame received for 5\nI0408 00:25:21.472599 1762 log.go:172] (0xc00060c000) (5) Data frame handling\nI0408 00:25:21.472633 1762 log.go:172] (0xc00060c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 00:25:21.513655 1762 log.go:172] (0xc000a52000) Data frame received for 3\nI0408 00:25:21.513679 1762 log.go:172] (0xc0009680a0) (3) Data frame handling\nI0408 00:25:21.513694 1762 log.go:172] (0xc0009680a0) (3) Data frame sent\nI0408 00:25:21.513700 1762 log.go:172] (0xc000a52000) Data frame received for 3\nI0408 00:25:21.513707 1762 log.go:172] (0xc0009680a0) (3) Data frame handling\nI0408 00:25:21.513797 1762 log.go:172] (0xc000a52000) Data frame received for 5\nI0408 00:25:21.513823 1762 log.go:172] (0xc00060c000) (5) Data frame handling\nI0408 00:25:21.515300 1762 log.go:172] (0xc000a52000) Data frame received for 1\nI0408 00:25:21.515326 1762 log.go:172] (0xc000968000) (1) Data frame handling\nI0408 00:25:21.515358 1762 log.go:172] (0xc000968000) (1) Data frame sent\nI0408 00:25:21.515387 1762 log.go:172] (0xc000a52000) (0xc000968000) Stream removed, broadcasting: 1\nI0408 00:25:21.515404 1762 log.go:172] (0xc000a52000) Go away received\nI0408 00:25:21.515768 1762 log.go:172] 
(0xc000a52000) (0xc000968000) Stream removed, broadcasting: 1\nI0408 00:25:21.515794 1762 log.go:172] (0xc000a52000) (0xc0009680a0) Stream removed, broadcasting: 3\nI0408 00:25:21.515803 1762 log.go:172] (0xc000a52000) (0xc00060c000) Stream removed, broadcasting: 5\n" Apr 8 00:25:21.520: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 00:25:21.520: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 00:25:21.520: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 00:25:21.523: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 8 00:25:31.529: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 8 00:25:31.529: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 8 00:25:31.529: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 8 00:25:31.540: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.9999996s Apr 8 00:25:32.545: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994013209s Apr 8 00:25:33.550: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988691967s Apr 8 00:25:34.555: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983564398s Apr 8 00:25:35.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978398228s Apr 8 00:25:36.565: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973242991s Apr 8 00:25:37.570: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.968202312s Apr 8 00:25:38.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963154995s Apr 8 00:25:39.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958520319s Apr 8 00:25:40.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 
953.718624ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4712 Apr 8 00:25:41.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4712 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 00:25:41.838: INFO: stderr: "I0408 00:25:41.750607 1783 log.go:172] (0xc00060ca50) (0xc000608280) Create stream\nI0408 00:25:41.750657 1783 log.go:172] (0xc00060ca50) (0xc000608280) Stream added, broadcasting: 1\nI0408 00:25:41.753259 1783 log.go:172] (0xc00060ca50) Reply frame received for 1\nI0408 00:25:41.753315 1783 log.go:172] (0xc00060ca50) (0xc000827180) Create stream\nI0408 00:25:41.753333 1783 log.go:172] (0xc00060ca50) (0xc000827180) Stream added, broadcasting: 3\nI0408 00:25:41.754600 1783 log.go:172] (0xc00060ca50) Reply frame received for 3\nI0408 00:25:41.754628 1783 log.go:172] (0xc00060ca50) (0xc000608320) Create stream\nI0408 00:25:41.754638 1783 log.go:172] (0xc00060ca50) (0xc000608320) Stream added, broadcasting: 5\nI0408 00:25:41.755738 1783 log.go:172] (0xc00060ca50) Reply frame received for 5\nI0408 00:25:41.831265 1783 log.go:172] (0xc00060ca50) Data frame received for 5\nI0408 00:25:41.831307 1783 log.go:172] (0xc000608320) (5) Data frame handling\nI0408 00:25:41.831325 1783 log.go:172] (0xc000608320) (5) Data frame sent\nI0408 00:25:41.831338 1783 log.go:172] (0xc00060ca50) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 00:25:41.831359 1783 log.go:172] (0xc00060ca50) Data frame received for 3\nI0408 00:25:41.831408 1783 log.go:172] (0xc000827180) (3) Data frame handling\nI0408 00:25:41.831449 1783 log.go:172] (0xc000827180) (3) Data frame sent\nI0408 00:25:41.831465 1783 log.go:172] (0xc00060ca50) Data frame received for 3\nI0408 00:25:41.831477 1783 log.go:172] (0xc000827180) (3) Data frame handling\nI0408 00:25:41.831496 1783 
log.go:172] (0xc000608320) (5) Data frame handling\nI0408 00:25:41.833453 1783 log.go:172] (0xc00060ca50) Data frame received for 1\nI0408 00:25:41.833476 1783 log.go:172] (0xc000608280) (1) Data frame handling\nI0408 00:25:41.833492 1783 log.go:172] (0xc000608280) (1) Data frame sent\nI0408 00:25:41.833526 1783 log.go:172] (0xc00060ca50) (0xc000608280) Stream removed, broadcasting: 1\nI0408 00:25:41.833557 1783 log.go:172] (0xc00060ca50) Go away received\nI0408 00:25:41.833910 1783 log.go:172] (0xc00060ca50) (0xc000608280) Stream removed, broadcasting: 1\nI0408 00:25:41.833935 1783 log.go:172] (0xc00060ca50) (0xc000827180) Stream removed, broadcasting: 3\nI0408 00:25:41.833948 1783 log.go:172] (0xc00060ca50) (0xc000608320) Stream removed, broadcasting: 5\n" Apr 8 00:25:41.838: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 00:25:41.838: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 00:25:41.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4712 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 00:25:42.019: INFO: stderr: "I0408 00:25:41.947320 1807 log.go:172] (0xc0003dc160) (0xc0003a1f40) Create stream\nI0408 00:25:41.947365 1807 log.go:172] (0xc0003dc160) (0xc0003a1f40) Stream added, broadcasting: 1\nI0408 00:25:41.949794 1807 log.go:172] (0xc0003dc160) Reply frame received for 1\nI0408 00:25:41.949827 1807 log.go:172] (0xc0003dc160) (0xc000613400) Create stream\nI0408 00:25:41.949835 1807 log.go:172] (0xc0003dc160) (0xc000613400) Stream added, broadcasting: 3\nI0408 00:25:41.950591 1807 log.go:172] (0xc0003dc160) Reply frame received for 3\nI0408 00:25:41.950623 1807 log.go:172] (0xc0003dc160) (0xc0008a4000) Create stream\nI0408 00:25:41.950635 1807 log.go:172] (0xc0003dc160) (0xc0008a4000) Stream 
added, broadcasting: 5\nI0408 00:25:41.951473 1807 log.go:172] (0xc0003dc160) Reply frame received for 5\nI0408 00:25:42.012688 1807 log.go:172] (0xc0003dc160) Data frame received for 5\nI0408 00:25:42.012736 1807 log.go:172] (0xc0008a4000) (5) Data frame handling\nI0408 00:25:42.012771 1807 log.go:172] (0xc0008a4000) (5) Data frame sent\nI0408 00:25:42.012791 1807 log.go:172] (0xc0003dc160) Data frame received for 5\nI0408 00:25:42.012805 1807 log.go:172] (0xc0008a4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 00:25:42.012859 1807 log.go:172] (0xc0003dc160) Data frame received for 3\nI0408 00:25:42.012873 1807 log.go:172] (0xc000613400) (3) Data frame handling\nI0408 00:25:42.012886 1807 log.go:172] (0xc000613400) (3) Data frame sent\nI0408 00:25:42.012895 1807 log.go:172] (0xc0003dc160) Data frame received for 3\nI0408 00:25:42.012911 1807 log.go:172] (0xc000613400) (3) Data frame handling\nI0408 00:25:42.014862 1807 log.go:172] (0xc0003dc160) Data frame received for 1\nI0408 00:25:42.014903 1807 log.go:172] (0xc0003a1f40) (1) Data frame handling\nI0408 00:25:42.014925 1807 log.go:172] (0xc0003a1f40) (1) Data frame sent\nI0408 00:25:42.014950 1807 log.go:172] (0xc0003dc160) (0xc0003a1f40) Stream removed, broadcasting: 1\nI0408 00:25:42.014974 1807 log.go:172] (0xc0003dc160) Go away received\nI0408 00:25:42.015415 1807 log.go:172] (0xc0003dc160) (0xc0003a1f40) Stream removed, broadcasting: 1\nI0408 00:25:42.015454 1807 log.go:172] (0xc0003dc160) (0xc000613400) Stream removed, broadcasting: 3\nI0408 00:25:42.015496 1807 log.go:172] (0xc0003dc160) (0xc0008a4000) Stream removed, broadcasting: 5\n" Apr 8 00:25:42.020: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 00:25:42.020: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 00:25:42.020: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4712 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 00:25:42.238: INFO: stderr: "I0408 00:25:42.144784 1827 log.go:172] (0xc00053e160) (0xc0008f4140) Create stream\nI0408 00:25:42.144861 1827 log.go:172] (0xc00053e160) (0xc0008f4140) Stream added, broadcasting: 1\nI0408 00:25:42.148410 1827 log.go:172] (0xc00053e160) Reply frame received for 1\nI0408 00:25:42.148448 1827 log.go:172] (0xc00053e160) (0xc0008f8a00) Create stream\nI0408 00:25:42.148465 1827 log.go:172] (0xc00053e160) (0xc0008f8a00) Stream added, broadcasting: 3\nI0408 00:25:42.149579 1827 log.go:172] (0xc00053e160) Reply frame received for 3\nI0408 00:25:42.149621 1827 log.go:172] (0xc00053e160) (0xc000312140) Create stream\nI0408 00:25:42.149633 1827 log.go:172] (0xc00053e160) (0xc000312140) Stream added, broadcasting: 5\nI0408 00:25:42.150670 1827 log.go:172] (0xc00053e160) Reply frame received for 5\nI0408 00:25:42.231178 1827 log.go:172] (0xc00053e160) Data frame received for 5\nI0408 00:25:42.231215 1827 log.go:172] (0xc000312140) (5) Data frame handling\nI0408 00:25:42.231231 1827 log.go:172] (0xc000312140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 00:25:42.231247 1827 log.go:172] (0xc00053e160) Data frame received for 3\nI0408 00:25:42.231252 1827 log.go:172] (0xc0008f8a00) (3) Data frame handling\nI0408 00:25:42.231259 1827 log.go:172] (0xc0008f8a00) (3) Data frame sent\nI0408 00:25:42.231265 1827 log.go:172] (0xc00053e160) Data frame received for 3\nI0408 00:25:42.231270 1827 log.go:172] (0xc0008f8a00) (3) Data frame handling\nI0408 00:25:42.231290 1827 log.go:172] (0xc00053e160) Data frame received for 5\nI0408 00:25:42.231303 1827 log.go:172] (0xc000312140) (5) Data frame handling\nI0408 00:25:42.232756 1827 log.go:172] (0xc00053e160) Data frame received for 1\nI0408 00:25:42.232767 1827 log.go:172] (0xc0008f4140) (1) 
Data frame handling\nI0408 00:25:42.232773 1827 log.go:172] (0xc0008f4140) (1) Data frame sent\nI0408 00:25:42.232782 1827 log.go:172] (0xc00053e160) (0xc0008f4140) Stream removed, broadcasting: 1\nI0408 00:25:42.232932 1827 log.go:172] (0xc00053e160) Go away received\nI0408 00:25:42.233028 1827 log.go:172] (0xc00053e160) (0xc0008f4140) Stream removed, broadcasting: 1\nI0408 00:25:42.233045 1827 log.go:172] (0xc00053e160) (0xc0008f8a00) Stream removed, broadcasting: 3\nI0408 00:25:42.233051 1827 log.go:172] (0xc00053e160) (0xc000312140) Stream removed, broadcasting: 5\n" Apr 8 00:25:42.238: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 00:25:42.238: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 00:25:42.238: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 8 00:26:12.252: INFO: Deleting all statefulset in ns statefulset-4712 Apr 8 00:26:12.254: INFO: Scaling statefulset ss to 0 Apr 8 00:26:12.281: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 00:26:12.283: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:26:12.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4712" for this suite. 
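The `mv -v /usr/local/apache2/htdocs/index.html /tmp/` exec calls above make each pod fail its HTTP readiness probe without killing it, which is what the run relies on: scale-up stalls at one replica while `ss-0` is unready, and scale-down to 0 stalls while any of `ss-0`..`ss-2` is unready, until the files are moved back. A minimal simulation of those OrderedReady ordering rules (an illustration of the documented StatefulSet semantics, not the controller's implementation):

```python
# OrderedReady StatefulSet scaling, simplified:
# - scale-up creates ordinal n only when ordinals 0..n-1 are Running and Ready;
# - scale-down removes the highest ordinal first, and halts while any pod is unready.

def next_scale_up(ready, desired):
    """ready: list of per-ordinal readiness for existing pods.
    Returns the ordinal to create next, or None (done, or halted on an unready pod)."""
    if len(ready) >= desired:
        return None          # already at (or above) desired replicas
    if all(ready):
        return len(ready)    # create the next ordinal in order
    return None              # halt: an existing pod is unhealthy

def next_scale_down(ready, desired):
    """Returns the ordinal to delete next (highest first), or None while halted."""
    if len(ready) <= desired:
        return None
    if all(ready):
        return len(ready) - 1
    return None              # halt: an unhealthy pod blocks scale-down

# Matches the log: with ss-0 unready, "ss doesn't scale past 1";
# with all three unready, "ss doesn't scale past 3" on the way to 0.
print(next_scale_up([False], 3))          # None
print(next_scale_down([False] * 3, 0))    # None
```

Once readiness is restored, repeated calls walk the ordinals in order (0, 1, 2 up; 2, 1, 0 down), which is the "scaled up in order" / "scaled down in reverse order" verification in the run.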
• [SLOW TEST:92.210 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":159,"skipped":2886,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:26:12.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:26:12.360: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 8 00:26:17.363: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 8 00:26:17.363: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] 
[sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 8 00:26:17.432: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1420 /apis/apps/v1/namespaces/deployment-1420/deployments/test-cleanup-deployment 7d0248c7-ca2e-4f21-966b-41ad1caebe33 6279773 1 2020-04-08 00:26:17 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055064a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 8 00:26:17.438: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-1420 
/apis/apps/v1/namespaces/deployment-1420/replicasets/test-cleanup-deployment-577c77b589 3f06028c-08c5-4982-8ba7-8cb730782ecd 6279775 1 2020-04-08 00:26:17 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 7d0248c7-ca2e-4f21-966b-41ad1caebe33 0xc005506937 0xc005506938}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055069a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 00:26:17.438: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 8 00:26:17.438: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1420 /apis/apps/v1/namespaces/deployment-1420/replicasets/test-cleanup-controller dd1d077e-69fa-4a58-becc-416e8ef79063 6279774 1 2020-04-08 00:26:12 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 
Deployment test-cleanup-deployment 7d0248c7-ca2e-4f21-966b-41ad1caebe33 0xc00550684f 0xc005506860}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0055068c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 8 00:26:17.494: INFO: Pod "test-cleanup-controller-xkc57" is available: &Pod{ObjectMeta:{test-cleanup-controller-xkc57 test-cleanup-controller- deployment-1420 /api/v1/namespaces/deployment-1420/pods/test-cleanup-controller-xkc57 6c972874-8b15-4a0c-96de-eabf7d21e96b 6279763 0 2020-04-08 00:26:12 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller dd1d077e-69fa-4a58-becc-416e8ef79063 0xc005506e67 0xc005506e68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-628j8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-628j8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-628j8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not
-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:26:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:26:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:26:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:26:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.195,StartTime:2020-04-08 00:26:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 00:26:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2f44435e8ccb772d74c8f3b53d2eedca4f7fcd49f75b254fb4d6c1b1b66359f4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.195,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 
8 00:26:17.494: INFO: Pod "test-cleanup-deployment-577c77b589-jsq6x" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-jsq6x test-cleanup-deployment-577c77b589- deployment-1420 /api/v1/namespaces/deployment-1420/pods/test-cleanup-deployment-577c77b589-jsq6x 3438b55d-2efc-43db-b32f-6350bfadb237 6279780 0 2020-04-08 00:26:17 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 3f06028c-08c5-4982-8ba7-8cb730782ecd 0xc005507007 0xc005507008}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-628j8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-628j8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-628j8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePo
licy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:26:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:26:17.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1420" for this suite. 
• [SLOW TEST:5.242 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":160,"skipped":2891,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:26:17.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-ffc1dcc6-42e7-4ae8-a742-abb93c39fbec STEP: Creating a pod to test consume configMaps Apr 8 00:26:17.745: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9e62ab15-b167-4f5b-afed-7a964ee15eb1" in namespace "projected-9004" to be "Succeeded or Failed" Apr 8 00:26:17.756: INFO: Pod "pod-projected-configmaps-9e62ab15-b167-4f5b-afed-7a964ee15eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.660922ms Apr 8 00:26:19.760: INFO: Pod "pod-projected-configmaps-9e62ab15-b167-4f5b-afed-7a964ee15eb1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01577231s Apr 8 00:26:21.765: INFO: Pod "pod-projected-configmaps-9e62ab15-b167-4f5b-afed-7a964ee15eb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020196258s STEP: Saw pod success Apr 8 00:26:21.765: INFO: Pod "pod-projected-configmaps-9e62ab15-b167-4f5b-afed-7a964ee15eb1" satisfied condition "Succeeded or Failed" Apr 8 00:26:21.768: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-9e62ab15-b167-4f5b-afed-7a964ee15eb1 container projected-configmap-volume-test: STEP: delete the pod Apr 8 00:26:21.820: INFO: Waiting for pod pod-projected-configmaps-9e62ab15-b167-4f5b-afed-7a964ee15eb1 to disappear Apr 8 00:26:21.851: INFO: Pod pod-projected-configmaps-9e62ab15-b167-4f5b-afed-7a964ee15eb1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:26:21.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9004" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:26:21.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:26:21.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d48059ba-6e30-4ab5-9806-f6f9b13705e7" in namespace "projected-3259" to be "Succeeded or Failed" Apr 8 00:26:21.918: INFO: Pod "downwardapi-volume-d48059ba-6e30-4ab5-9806-f6f9b13705e7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.421308ms Apr 8 00:26:24.012: INFO: Pod "downwardapi-volume-d48059ba-6e30-4ab5-9806-f6f9b13705e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097802314s Apr 8 00:26:26.017: INFO: Pod "downwardapi-volume-d48059ba-6e30-4ab5-9806-f6f9b13705e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.102464474s STEP: Saw pod success Apr 8 00:26:26.017: INFO: Pod "downwardapi-volume-d48059ba-6e30-4ab5-9806-f6f9b13705e7" satisfied condition "Succeeded or Failed" Apr 8 00:26:26.020: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d48059ba-6e30-4ab5-9806-f6f9b13705e7 container client-container: STEP: delete the pod Apr 8 00:26:26.065: INFO: Waiting for pod downwardapi-volume-d48059ba-6e30-4ab5-9806-f6f9b13705e7 to disappear Apr 8 00:26:26.080: INFO: Pod downwardapi-volume-d48059ba-6e30-4ab5-9806-f6f9b13705e7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:26:26.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3259" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2924,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:26:26.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 
00:26:30.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3913" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2928,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:26:30.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:26:30.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2403' Apr 8 00:26:30.516: INFO: stderr: "" Apr 8 00:26:30.516: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 8 00:26:30.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2403' Apr 8 00:26:30.765: INFO: stderr: "" Apr 8 00:26:30.765: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 8 00:26:31.770: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 00:26:31.770: INFO: Found 0 / 1 Apr 8 00:26:32.770: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 00:26:32.770: INFO: Found 0 / 1 Apr 8 00:26:33.770: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 00:26:33.770: INFO: Found 1 / 1 Apr 8 00:26:33.770: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 8 00:26:33.773: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 00:26:33.773: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 8 00:26:33.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-f889r --namespace=kubectl-2403' Apr 8 00:26:33.900: INFO: stderr: "" Apr 8 00:26:33.900: INFO: stdout: "Name: agnhost-master-f889r\nNamespace: kubectl-2403\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Wed, 08 Apr 2020 00:26:30 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.198\nIPs:\n IP: 10.244.1.198\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://684c5db36425cf82144bdff7701399187873360da3fd0974edc48b742dcdf562\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 08 Apr 2020 00:26:32 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-s6h94 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-s6h94:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-s6h94\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: 
node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-2403/agnhost-master-f889r to latest-worker2\n Normal Pulled 2s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-master\n" Apr 8 00:26:33.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2403' Apr 8 00:26:34.022: INFO: stderr: "" Apr 8 00:26:34.022: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2403\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-f889r\n" Apr 8 00:26:34.022: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2403' Apr 8 00:26:34.131: INFO: stderr: "" Apr 8 00:26:34.131: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2403\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.34.189\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.198:6379\nSession Affinity: None\nEvents: \n" Apr 8 
00:26:34.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 8 00:26:34.263: INFO: stderr: "" Apr 8 00:26:34.263: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Wed, 08 Apr 2020 00:26:32 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 08 Apr 2020 00:22:46 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 08 Apr 2020 00:22:46 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 08 Apr 2020 00:22:46 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 08 Apr 2020 00:22:46 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 
611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 23d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 23d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 23d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 23d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 23d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 23d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 8 00:26:34.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-2403' Apr 8 00:26:34.366: INFO: stderr: "" Apr 8 00:26:34.367: INFO: stdout: "Name: kubectl-2403\nLabels: e2e-framework=kubectl\n e2e-run=f57357f1-1185-4755-9a1b-488e554b7439\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] 
Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:26:34.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2403" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":164,"skipped":2931,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:26:34.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:27:34.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2022" for this suite. 
• [SLOW TEST:60.082 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2947,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:27:34.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-06f576f9-1fda-4f26-8bc4-fa972d765d37 STEP: Creating a pod to test consume secrets Apr 8 00:27:34.557: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-69c738ba-4589-4494-a17d-159434aaba49" in namespace "projected-4818" to be "Succeeded or Failed" Apr 8 00:27:34.560: INFO: Pod "pod-projected-secrets-69c738ba-4589-4494-a17d-159434aaba49": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.836381ms Apr 8 00:27:36.565: INFO: Pod "pod-projected-secrets-69c738ba-4589-4494-a17d-159434aaba49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008430661s Apr 8 00:27:38.568: INFO: Pod "pod-projected-secrets-69c738ba-4589-4494-a17d-159434aaba49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011883172s STEP: Saw pod success Apr 8 00:27:38.569: INFO: Pod "pod-projected-secrets-69c738ba-4589-4494-a17d-159434aaba49" satisfied condition "Succeeded or Failed" Apr 8 00:27:38.572: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-69c738ba-4589-4494-a17d-159434aaba49 container projected-secret-volume-test: STEP: delete the pod Apr 8 00:27:38.610: INFO: Waiting for pod pod-projected-secrets-69c738ba-4589-4494-a17d-159434aaba49 to disappear Apr 8 00:27:38.620: INFO: Pod pod-projected-secrets-69c738ba-4589-4494-a17d-159434aaba49 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:27:38.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4818" for this suite. 
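Editor's note: the projected-secret test above mounts a secret with an explicit `defaultMode` and a pod-level `fsGroup` so a non-root container can read the files. One detail worth calling out: the Kubernetes API serializes file modes as decimal integers, which is why a later kubectl JSON dump in this log shows `"defaultMode": 420` (that is octal `0644`). A sketch of the moving parts, with assumed illustrative values (the e2e fixture's exact mode and GID are not shown in this log):

```python
# Kubernetes serializes volume file modes as decimal in JSON, so the
# familiar octal permissions must be converted:
assert 0o644 == 420   # rw-r--r-- : the default secret file mode
assert 0o440 == 288   # r--r----- : owner- and group-readable only

# Hypothetical projected-secret volume (names/values illustrative):
projected_volume = {
    "name": "projected-secret-volume",
    "projected": {
        "defaultMode": 0o440,  # appears on the wire as 288
        "sources": [{"secret": {"name": "projected-secret-test"}}],
    },
}

# fsGroup makes the mounted files group-owned by that GID, which is
# what lets a non-root user read mode-0440 files (values assumed):
security_context = {"runAsUser": 1000, "fsGroup": 1000}
```

The pairing matters: a restrictive `defaultMode` alone would lock a non-root container out of its own secret files; `fsGroup` supplies the group ownership that makes the group-read bit useful.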
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2948,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:27:38.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-0d2200f1-fb1f-48b8-81a8-1f9797d18244 in namespace container-probe-5175 Apr 8 00:27:42.720: INFO: Started pod busybox-0d2200f1-fb1f-48b8-81a8-1f9797d18244 in namespace container-probe-5175 STEP: checking the pod's current state and verifying that restartCount is present Apr 8 00:27:42.723: INFO: Initial restart count of pod busybox-0d2200f1-fb1f-48b8-81a8-1f9797d18244 is 0 Apr 8 00:28:32.839: INFO: Restart count of pod container-probe-5175/busybox-0d2200f1-fb1f-48b8-81a8-1f9797d18244 is now 1 (50.116280605s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:28:32.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-probe-5175" for this suite. • [SLOW TEST:54.252 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2955,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:28:32.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 00:28:33.483: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 00:28:35.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902513, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902513, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902513, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902513, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 00:28:37.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902513, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902513, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902513, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902513, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 00:28:40.519: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:28:52.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3714" for this suite. STEP: Destroying namespace "webhook-3714-markers" for this suite. 
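Editor's note: the four STEP lines above exercise every combination of webhook timeout versus latency. The decision table is small enough to model directly; the sketch below is a simplified model of the admission outcome, not the apiserver's implementation. The 10s default for an empty `timeoutSeconds` comes from the log itself ("defaulted to 10s in v1").

```python
def webhook_admits(latency_s, timeout_s=10, failure_policy="Fail"):
    """Simplified model: does the request get through the webhook?

    timeout_s defaults to 10 because an unset timeoutSeconds is
    defaulted to 10s in admissionregistration.k8s.io/v1.
    """
    if latency_s <= timeout_s:
        return True  # webhook answered in time; it admits in this test
    # The call timed out; failurePolicy decides the outcome.
    return failure_policy == "Ignore"

# The four scenarios from the test above:
rejected = webhook_admits(latency_s=5, timeout_s=1)                            # Fail policy
ignored  = webhook_admits(latency_s=5, timeout_s=1, failure_policy="Ignore")   # Ignore policy
in_time  = webhook_admits(latency_s=5, timeout_s=30)                           # timeout > latency
default  = webhook_admits(latency_s=5)                                         # empty timeout -> 10s
```

Running the model reproduces the log: only the slow webhook with `failurePolicy: Fail` and a 1s timeout rejects the request.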
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.971 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":168,"skipped":2975,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:28:52.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 8 00:28:57.423: INFO: Successfully updated pod "pod-update-67d9414a-3846-4489-8781-a0adff2e3080" STEP: verifying the updated pod is in kubernetes Apr 8 00:28:57.432: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:28:57.432: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7417" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":2986,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:28:57.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-1814 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 8 00:28:57.488: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 8 00:28:57.542: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 8 00:28:59.566: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 8 00:29:01.546: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 00:29:03.546: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 00:29:05.546: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 00:29:07.547: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 00:29:09.546: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 00:29:11.546: 
INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 8 00:29:11.551: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 8 00:29:13.556: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 8 00:29:17.591: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.177:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1814 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:29:17.591: INFO: >>> kubeConfig: /root/.kube/config I0408 00:29:17.629084 7 log.go:172] (0xc002623760) (0xc000f1ab40) Create stream I0408 00:29:17.629180 7 log.go:172] (0xc002623760) (0xc000f1ab40) Stream added, broadcasting: 1 I0408 00:29:17.631114 7 log.go:172] (0xc002623760) Reply frame received for 1 I0408 00:29:17.631156 7 log.go:172] (0xc002623760) (0xc0019c79a0) Create stream I0408 00:29:17.631165 7 log.go:172] (0xc002623760) (0xc0019c79a0) Stream added, broadcasting: 3 I0408 00:29:17.632152 7 log.go:172] (0xc002623760) Reply frame received for 3 I0408 00:29:17.632196 7 log.go:172] (0xc002623760) (0xc000f1ac80) Create stream I0408 00:29:17.632219 7 log.go:172] (0xc002623760) (0xc000f1ac80) Stream added, broadcasting: 5 I0408 00:29:17.633662 7 log.go:172] (0xc002623760) Reply frame received for 5 I0408 00:29:17.708720 7 log.go:172] (0xc002623760) Data frame received for 3 I0408 00:29:17.708758 7 log.go:172] (0xc0019c79a0) (3) Data frame handling I0408 00:29:17.708779 7 log.go:172] (0xc0019c79a0) (3) Data frame sent I0408 00:29:17.708903 7 log.go:172] (0xc002623760) Data frame received for 5 I0408 00:29:17.708931 7 log.go:172] (0xc000f1ac80) (5) Data frame handling I0408 00:29:17.709031 7 log.go:172] (0xc002623760) Data frame received for 3 I0408 00:29:17.709054 7 log.go:172] (0xc0019c79a0) (3) Data frame handling I0408 00:29:17.711186 7 log.go:172] (0xc002623760) Data frame received for 1 
I0408 00:29:17.711219 7 log.go:172] (0xc000f1ab40) (1) Data frame handling I0408 00:29:17.711248 7 log.go:172] (0xc000f1ab40) (1) Data frame sent I0408 00:29:17.711275 7 log.go:172] (0xc002623760) (0xc000f1ab40) Stream removed, broadcasting: 1 I0408 00:29:17.711308 7 log.go:172] (0xc002623760) Go away received I0408 00:29:17.711437 7 log.go:172] (0xc002623760) (0xc000f1ab40) Stream removed, broadcasting: 1 I0408 00:29:17.711462 7 log.go:172] (0xc002623760) (0xc0019c79a0) Stream removed, broadcasting: 3 I0408 00:29:17.711475 7 log.go:172] (0xc002623760) (0xc000f1ac80) Stream removed, broadcasting: 5 Apr 8 00:29:17.711: INFO: Found all expected endpoints: [netserver-0] Apr 8 00:29:17.714: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.201:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1814 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:29:17.714: INFO: >>> kubeConfig: /root/.kube/config I0408 00:29:17.747707 7 log.go:172] (0xc0061784d0) (0xc0014ca140) Create stream I0408 00:29:17.747751 7 log.go:172] (0xc0061784d0) (0xc0014ca140) Stream added, broadcasting: 1 I0408 00:29:17.749870 7 log.go:172] (0xc0061784d0) Reply frame received for 1 I0408 00:29:17.749905 7 log.go:172] (0xc0061784d0) (0xc001164820) Create stream I0408 00:29:17.749916 7 log.go:172] (0xc0061784d0) (0xc001164820) Stream added, broadcasting: 3 I0408 00:29:17.750902 7 log.go:172] (0xc0061784d0) Reply frame received for 3 I0408 00:29:17.750949 7 log.go:172] (0xc0061784d0) (0xc001164e60) Create stream I0408 00:29:17.750965 7 log.go:172] (0xc0061784d0) (0xc001164e60) Stream added, broadcasting: 5 I0408 00:29:17.751842 7 log.go:172] (0xc0061784d0) Reply frame received for 5 I0408 00:29:17.816488 7 log.go:172] (0xc0061784d0) Data frame received for 3 I0408 00:29:17.816519 7 log.go:172] (0xc001164820) (3) Data frame handling I0408 00:29:17.816553 7 
log.go:172] (0xc001164820) (3) Data frame sent I0408 00:29:17.816566 7 log.go:172] (0xc0061784d0) Data frame received for 3 I0408 00:29:17.816585 7 log.go:172] (0xc001164820) (3) Data frame handling I0408 00:29:17.816625 7 log.go:172] (0xc0061784d0) Data frame received for 5 I0408 00:29:17.816654 7 log.go:172] (0xc001164e60) (5) Data frame handling I0408 00:29:17.818222 7 log.go:172] (0xc0061784d0) Data frame received for 1 I0408 00:29:17.818237 7 log.go:172] (0xc0014ca140) (1) Data frame handling I0408 00:29:17.818250 7 log.go:172] (0xc0014ca140) (1) Data frame sent I0408 00:29:17.818266 7 log.go:172] (0xc0061784d0) (0xc0014ca140) Stream removed, broadcasting: 1 I0408 00:29:17.818290 7 log.go:172] (0xc0061784d0) Go away received I0408 00:29:17.818432 7 log.go:172] (0xc0061784d0) (0xc0014ca140) Stream removed, broadcasting: 1 I0408 00:29:17.818463 7 log.go:172] (0xc0061784d0) (0xc001164820) Stream removed, broadcasting: 3 I0408 00:29:17.818476 7 log.go:172] (0xc0061784d0) (0xc001164e60) Stream removed, broadcasting: 5 Apr 8 00:29:17.818: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:29:17.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1814" for this suite. 
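Editor's note: the stream-frame noise above is the exec transport for a single command: `curl http://<pod-ip>:8080/hostName` run from a host-network test pod against each netserver pod, asserting the response body is a non-empty hostname. A minimal local re-creation of that round trip (this is an assumption-level sketch, not the e2e framework's netserver, which serves on an ephemeral local port instead of a pod IP):

```python
# Serve the hostname over HTTP, fetch it, and check the body is
# non-empty -- the same check the test performs via curl.
import http.server
import socket
import threading
import urllib.request

class HostNameHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hostName":
            body = socket.gethostname().encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep test output quiet
        pass

# Port 0 asks the OS for any free port.
server = http.server.HTTPServer(("127.0.0.1", 0), HostNameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/hostName", timeout=5) as resp:
    host_name = resp.read().decode().strip()

server.shutdown()
```

In the real test the interesting part is the network path, not the HTTP exchange: a non-empty reply proves the node-level pod can reach the pod-network IP `10.244.x.y` across the CNI overlay (kindnet in this cluster).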
• [SLOW TEST:20.387 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":3008,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:29:17.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 8 00:29:17.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine 
--labels=run=e2e-test-httpd-pod --namespace=kubectl-5218' Apr 8 00:29:17.995: INFO: stderr: "" Apr 8 00:29:17.995: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 8 00:29:23.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5218 -o json' Apr 8 00:29:23.144: INFO: stderr: "" Apr 8 00:29:23.144: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-08T00:29:17Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5218\",\n \"resourceVersion\": \"6280734\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5218/pods/e2e-test-httpd-pod\",\n \"uid\": \"fc452484-7e68-4e88-9e97-684dd1c671be\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-s4dxv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n 
\"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-s4dxv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-s4dxv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-08T00:29:18Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-08T00:29:20Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-08T00:29:20Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-08T00:29:17Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://cec9c1aa3832cf7615ee0d188ad4b7c5c9c50d184d8e951785394c88257d3836\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-08T00:29:20Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.202\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.202\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-08T00:29:18Z\"\n }\n}\n" STEP: replace the image in the pod Apr 8 00:29:23.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5218' Apr 8 00:29:23.549: INFO: stderr: "" Apr 8 00:29:23.549: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Apr 8 00:29:23.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5218' Apr 8 00:29:32.981: INFO: stderr: "" Apr 8 00:29:32.981: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:29:32.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5218" for this suite. • [SLOW TEST:15.161 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":171,"skipped":3030,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:29:32.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:29:33.036: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 8 00:29:33.080: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 8 00:29:38.093: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 8 00:29:38.093: INFO: Creating deployment "test-rolling-update-deployment" Apr 8 00:29:38.101: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 8 00:29:38.136: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 8 00:29:40.145: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 8 00:29:40.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902578, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902578, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902578, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902578, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 
00:29:42.153: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 8 00:29:42.161: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6764 /apis/apps/v1/namespaces/deployment-6764/deployments/test-rolling-update-deployment 2bea4790-3017-4e6a-beac-e29203af3f48 6280888 1 2020-04-08 00:29:38 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004967868 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-08 00:29:38 +0000 UTC,LastTransitionTime:2020-04-08 00:29:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-08 00:29:40 +0000 UTC,LastTransitionTime:2020-04-08 00:29:38 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 8 00:29:42.165: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-6764 /apis/apps/v1/namespaces/deployment-6764/replicasets/test-rolling-update-deployment-664dd8fc7f b8ed33d7-cd03-4e09-80c2-1faab537e7da 6280877 1 2020-04-08 00:29:38 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2bea4790-3017-4e6a-beac-e29203af3f48 0xc004967d87 0xc004967d88}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] 
[] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004967df8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 8 00:29:42.165: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 8 00:29:42.165: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6764 /apis/apps/v1/namespaces/deployment-6764/replicasets/test-rolling-update-controller ae8c6f19-b8e9-4cf6-8b9c-931d40df7981 6280886 2 2020-04-08 00:29:33 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2bea4790-3017-4e6a-beac-e29203af3f48 0xc004967cb7 0xc004967cb8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004967d18 ClusterFirst map[] false false false 
PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 00:29:42.169: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-gtwkb" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-gtwkb test-rolling-update-deployment-664dd8fc7f- deployment-6764 /api/v1/namespaces/deployment-6764/pods/test-rolling-update-deployment-664dd8fc7f-gtwkb cd8af0b7-f1a5-4ddf-96e2-2d6848ecb646 6280876 0 2020-04-08 00:29:38 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f b8ed33d7-cd03-4e09-80c2-1faab537e7da 0xc005554eb7 0xc005554eb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dckh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dckh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dckh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPath
Expr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:29:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-04-08 00:29:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:29:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:29:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.179,StartTime:2020-04-08 00:29:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 00:29:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://f3c2956afd40f5d56658371ed4fa2eb30009644c8a1a95b40b121bf31824badf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.179,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:29:42.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6764" for this suite. 
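[Editor's note] The Deployment dump above shows a RollingUpdate strategy of 25% maxUnavailable / 25% maxSurge, and the ReplicaSet annotations record `deployment.kubernetes.io/desired-replicas:1` and `deployment.kubernetes.io/max-replicas:2`. Those absolute counts follow from how Kubernetes resolves percentages against `spec.replicas` (maxSurge rounds up, maxUnavailable rounds down). A minimal sketch of that arithmetic, not the controller's actual code:

```python
import math

def rolling_update_bounds(replicas: int, max_surge_pct: int, max_unavailable_pct: int) -> dict:
    """Resolve RollingUpdate percentages to absolute pod-count bounds.

    Kubernetes rounds maxSurge up and maxUnavailable down against spec.replicas.
    """
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return {
        # old + new pods allowed to coexist during the rollout
        "max_total_pods": replicas + surge,
        # pods that must stay available throughout the rollout
        "min_available_pods": replicas - unavailable,
    }
```

For the single-replica Deployment in this log, 25%/25% resolves to surge 1 and unavailable 0, which matches the `max-replicas:2` annotation and explains why a new pod must become available before the old one is deleted.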
• [SLOW TEST:9.189 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":172,"skipped":3063,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:29:42.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:29:42.292: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b009016d-1182-47ab-8a3f-6fe81b25a578", Controller:(*bool)(0xc005555aba), BlockOwnerDeletion:(*bool)(0xc005555abb)}} Apr 8 00:29:42.307: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"356e2d41-f314-44c7-bcec-1ceb1dc30af1", Controller:(*bool)(0xc005bc620a), BlockOwnerDeletion:(*bool)(0xc005bc620b)}} Apr 8 00:29:42.334: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", 
UID:"94b75b20-2468-417a-9335-77d4c2df99bc", Controller:(*bool)(0xc005555c92), BlockOwnerDeletion:(*bool)(0xc005555c93)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:29:47.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8101" for this suite. • [SLOW TEST:5.206 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":173,"skipped":3093,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:29:47.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-9e29fd88-71dd-45f9-875c-4e1b4886048a STEP: Creating a pod to test consume secrets Apr 8 00:29:47.481: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4005eb63-b25a-41bc-8909-6063e222f1e3" in 
namespace "projected-5409" to be "Succeeded or Failed" Apr 8 00:29:47.484: INFO: Pod "pod-projected-secrets-4005eb63-b25a-41bc-8909-6063e222f1e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.196401ms Apr 8 00:29:49.488: INFO: Pod "pod-projected-secrets-4005eb63-b25a-41bc-8909-6063e222f1e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006813346s Apr 8 00:29:51.492: INFO: Pod "pod-projected-secrets-4005eb63-b25a-41bc-8909-6063e222f1e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011261924s STEP: Saw pod success Apr 8 00:29:51.492: INFO: Pod "pod-projected-secrets-4005eb63-b25a-41bc-8909-6063e222f1e3" satisfied condition "Succeeded or Failed" Apr 8 00:29:51.496: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-4005eb63-b25a-41bc-8909-6063e222f1e3 container projected-secret-volume-test: STEP: delete the pod Apr 8 00:29:51.540: INFO: Waiting for pod pod-projected-secrets-4005eb63-b25a-41bc-8909-6063e222f1e3 to disappear Apr 8 00:29:51.544: INFO: Pod pod-projected-secrets-4005eb63-b25a-41bc-8909-6063e222f1e3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:29:51.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5409" for this suite. 
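[Editor's note] Both secret-volume tests above follow the same pattern: poll the pod's phase until it reaches "Succeeded" or "Failed", giving up after a 5m timeout. A cluster-free sketch of that wait loop, where `get_phase` is a stand-in for the API call the e2e framework makes:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep) -> str:
    """Poll get_phase() until it returns a terminal pod phase or the timeout expires.

    Mirrors the framework's "Succeeded or Failed" wait; clock/sleep are
    injectable so the loop can be exercised without real delays.
    """
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
```

In the log, the secrets pod goes Pending, Pending, then Succeeded across three polls roughly 2s apart, which is exactly this loop's happy path.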
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":3113,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:29:51.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:29:51.615: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 8 00:29:51.623: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:29:51.640: INFO: Number of nodes with available pods: 0 Apr 8 00:29:51.640: INFO: Node latest-worker is running more than one daemon pod Apr 8 00:29:52.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:29:52.691: INFO: Number of nodes with available pods: 0 Apr 8 00:29:52.692: INFO: Node latest-worker is running more than one daemon pod Apr 8 00:29:53.724: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:29:53.756: INFO: Number of nodes with available pods: 0 Apr 8 00:29:53.756: INFO: Node latest-worker is running more than one daemon pod Apr 8 00:29:54.646: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:29:54.660: INFO: Number of nodes with available pods: 0 Apr 8 00:29:54.660: INFO: Node latest-worker is running more than one daemon pod Apr 8 00:29:55.646: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:29:55.650: INFO: Number of nodes with available pods: 2 Apr 8 00:29:55.650: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 8 00:29:55.714: INFO: Wrong image for pod: daemon-set-bt42x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 8 00:29:55.714: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:29:55.728: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:29:56.732: INFO: Wrong image for pod: daemon-set-bt42x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:29:56.732: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:29:56.734: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:29:57.744: INFO: Wrong image for pod: daemon-set-bt42x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:29:57.744: INFO: Pod daemon-set-bt42x is not available Apr 8 00:29:57.744: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:29:57.795: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:29:58.732: INFO: Wrong image for pod: daemon-set-bt42x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:29:58.732: INFO: Pod daemon-set-bt42x is not available Apr 8 00:29:58.732: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 8 00:29:58.736: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:29:59.732: INFO: Pod daemon-set-cpvwl is not available Apr 8 00:29:59.732: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:29:59.736: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:00.746: INFO: Pod daemon-set-cpvwl is not available Apr 8 00:30:00.746: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:30:00.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:01.733: INFO: Pod daemon-set-cpvwl is not available Apr 8 00:30:01.733: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:30:01.736: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:02.733: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:30:02.738: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:03.732: INFO: Wrong image for pod: daemon-set-xzf4t. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:30:03.732: INFO: Pod daemon-set-xzf4t is not available Apr 8 00:30:03.736: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:04.733: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:30:04.733: INFO: Pod daemon-set-xzf4t is not available Apr 8 00:30:04.737: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:05.732: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:30:05.732: INFO: Pod daemon-set-xzf4t is not available Apr 8 00:30:05.735: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:06.733: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:30:06.733: INFO: Pod daemon-set-xzf4t is not available Apr 8 00:30:06.738: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:07.732: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 8 00:30:07.732: INFO: Pod daemon-set-xzf4t is not available Apr 8 00:30:07.735: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:08.732: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:30:08.732: INFO: Pod daemon-set-xzf4t is not available Apr 8 00:30:08.736: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:09.732: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:30:09.733: INFO: Pod daemon-set-xzf4t is not available Apr 8 00:30:09.765: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:10.743: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:30:10.743: INFO: Pod daemon-set-xzf4t is not available Apr 8 00:30:10.747: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:11.731: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 8 00:30:11.731: INFO: Pod daemon-set-xzf4t is not available Apr 8 00:30:11.735: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:12.733: INFO: Wrong image for pod: daemon-set-xzf4t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 8 00:30:12.733: INFO: Pod daemon-set-xzf4t is not available Apr 8 00:30:12.737: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:13.843: INFO: Pod daemon-set-68ffx is not available Apr 8 00:30:13.867: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 8 00:30:13.881: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:13.884: INFO: Number of nodes with available pods: 1 Apr 8 00:30:13.884: INFO: Node latest-worker2 is running more than one daemon pod Apr 8 00:30:14.890: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:14.893: INFO: Number of nodes with available pods: 1 Apr 8 00:30:14.893: INFO: Node latest-worker2 is running more than one daemon pod Apr 8 00:30:15.888: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:15.892: INFO: Number of nodes with available pods: 1 Apr 8 00:30:15.892: INFO: Node latest-worker2 is running more than one daemon pod Apr 8 00:30:16.889: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:30:16.893: INFO: Number of nodes with available pods: 2 Apr 8 00:30:16.893: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2402, will wait for the garbage collector to delete the pods Apr 8 00:30:16.968: INFO: Deleting DaemonSet.extensions daemon-set took: 6.246039ms Apr 8 00:30:17.268: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.23681ms Apr 8 00:30:23.072: INFO: Number of nodes with available pods: 0 Apr 8 00:30:23.072: INFO: Number of running nodes: 0, number of available pods: 0 Apr 
8 00:30:23.075: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2402/daemonsets","resourceVersion":"6281192"},"items":null} Apr 8 00:30:23.078: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2402/pods","resourceVersion":"6281192"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:30:23.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2402" for this suite. • [SLOW TEST:31.545 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":175,"skipped":3126,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:30:23.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:30:23.211: INFO: Creating ReplicaSet my-hostname-basic-2f83e02c-d05e-4c9f-84b2-d1e90bd1d109 Apr 8 00:30:23.234: INFO: Pod name my-hostname-basic-2f83e02c-d05e-4c9f-84b2-d1e90bd1d109: Found 0 pods out of 1 Apr 8 00:30:28.249: INFO: Pod name my-hostname-basic-2f83e02c-d05e-4c9f-84b2-d1e90bd1d109: Found 1 pods out of 1 Apr 8 00:30:28.249: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2f83e02c-d05e-4c9f-84b2-d1e90bd1d109" is running Apr 8 00:30:28.260: INFO: Pod "my-hostname-basic-2f83e02c-d05e-4c9f-84b2-d1e90bd1d109-hr277" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 00:30:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 00:30:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 00:30:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 00:30:23 +0000 UTC Reason: Message:}]) Apr 8 00:30:28.260: INFO: Trying to dial the pod Apr 8 00:30:33.272: INFO: Controller my-hostname-basic-2f83e02c-d05e-4c9f-84b2-d1e90bd1d109: Got expected result from replica 1 [my-hostname-basic-2f83e02c-d05e-4c9f-84b2-d1e90bd1d109-hr277]: "my-hostname-basic-2f83e02c-d05e-4c9f-84b2-d1e90bd1d109-hr277", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:30:33.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3138" for this suite. 
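[Editor's note] The ReplicaSet test above dials each replica and expects the response body to be the pod's own name (the serve-hostname behavior: "Got expected result from replica 1 [...-hr277]: "...-hr277""). A sketch of that verification, with `dial` standing in for the proxied HTTP GET the framework performs against each pod:

```python
def verify_replicas_serve_hostname(pod_names, dial):
    """Dial every replica; return (pod, response) pairs that did not echo their own name.

    An empty result means all required successes were observed.
    """
    failures = []
    for name in pod_names:
        body = dial(name)
        if body != name:
            failures.append((name, body))
    return failures
```

The test passes once every replica has answered with its own hostname, so a non-empty failure list would correspond to the e2e test retrying or failing.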
• [SLOW TEST:10.183 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":176,"skipped":3142,"failed":0} SS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:30:33.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:30:33.417: INFO: The status of Pod test-webserver-637f9550-b3c0-4d56-ba8d-2ae8cae81bbe is Pending, waiting for it to be Running (with Ready = true) Apr 8 00:30:35.421: INFO: The status of Pod test-webserver-637f9550-b3c0-4d56-ba8d-2ae8cae81bbe is Pending, waiting for it to be Running (with Ready = true) Apr 8 00:30:37.421: INFO: The status of Pod test-webserver-637f9550-b3c0-4d56-ba8d-2ae8cae81bbe is Running (Ready = false) Apr 8 00:30:39.421: INFO: The status of Pod 
test-webserver-637f9550-b3c0-4d56-ba8d-2ae8cae81bbe is Running (Ready = false) Apr 8 00:30:41.421: INFO: The status of Pod test-webserver-637f9550-b3c0-4d56-ba8d-2ae8cae81bbe is Running (Ready = false) Apr 8 00:30:43.422: INFO: The status of Pod test-webserver-637f9550-b3c0-4d56-ba8d-2ae8cae81bbe is Running (Ready = false) Apr 8 00:30:45.422: INFO: The status of Pod test-webserver-637f9550-b3c0-4d56-ba8d-2ae8cae81bbe is Running (Ready = false) Apr 8 00:30:47.421: INFO: The status of Pod test-webserver-637f9550-b3c0-4d56-ba8d-2ae8cae81bbe is Running (Ready = false) Apr 8 00:30:49.421: INFO: The status of Pod test-webserver-637f9550-b3c0-4d56-ba8d-2ae8cae81bbe is Running (Ready = false) Apr 8 00:30:51.421: INFO: The status of Pod test-webserver-637f9550-b3c0-4d56-ba8d-2ae8cae81bbe is Running (Ready = false) Apr 8 00:30:53.421: INFO: The status of Pod test-webserver-637f9550-b3c0-4d56-ba8d-2ae8cae81bbe is Running (Ready = true) Apr 8 00:30:53.424: INFO: Container started at 2020-04-08 00:30:35 +0000 UTC, pod became ready at 2020-04-08 00:30:53 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:30:53.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-264" for this suite. 
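[Editor's note] The readiness-probe test observes the container start at 00:30:35 but the pod only become Ready at 00:30:53, an ~18s gap held open by the probe's initialDelaySeconds. Under standard probe semantics (first probe fires after the initial delay; successThreshold consecutive successes, one per period, are then required), the earliest possible Ready time can be sketched as below. The concrete numbers are illustrative assumptions, not the test's actual probe spec:

```python
def earliest_ready_seconds(initial_delay: int, period: int, success_threshold: int = 1) -> int:
    """Earliest time (seconds after container start) a readiness probe can first report Ready.

    First probe runs at initial_delay; each further required success adds one period.
    Real probes may land later than this lower bound.
    """
    return initial_delay + (success_threshold - 1) * period
```

For example, an initial delay of 20s with the default successThreshold of 1 means the pod cannot be Ready before t+20s, consistent with the test's assertion that the pod "should not be ready before initial delay".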
• [SLOW TEST:20.153 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":3144,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:30:53.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 8 00:30:54.001: INFO: Pod name wrapped-volume-race-06d66c81-3dfc-45c7-b990-86e18fceb86d: Found 0 pods out of 5
Apr 8 00:30:59.042: INFO: Pod name wrapped-volume-race-06d66c81-3dfc-45c7-b990-86e18fceb86d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-06d66c81-3dfc-45c7-b990-86e18fceb86d in namespace emptydir-wrapper-5011, will wait for the garbage collector to delete the pods
Apr 8 00:31:13.139: INFO: Deleting ReplicationController wrapped-volume-race-06d66c81-3dfc-45c7-b990-86e18fceb86d took: 6.727645ms
Apr 8 00:31:13.439: INFO: Terminating ReplicationController wrapped-volume-race-06d66c81-3dfc-45c7-b990-86e18fceb86d pods took: 300.247494ms
STEP: Creating RC which spawns configmap-volume pods
Apr 8 00:31:23.322: INFO: Pod name wrapped-volume-race-0b8d9421-8084-412f-a4da-0a963207ddb8: Found 0 pods out of 5
Apr 8 00:31:28.328: INFO: Pod name wrapped-volume-race-0b8d9421-8084-412f-a4da-0a963207ddb8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0b8d9421-8084-412f-a4da-0a963207ddb8 in namespace emptydir-wrapper-5011, will wait for the garbage collector to delete the pods
Apr 8 00:31:42.456: INFO: Deleting ReplicationController wrapped-volume-race-0b8d9421-8084-412f-a4da-0a963207ddb8 took: 8.201561ms
Apr 8 00:31:42.757: INFO: Terminating ReplicationController wrapped-volume-race-0b8d9421-8084-412f-a4da-0a963207ddb8 pods took: 300.440531ms
STEP: Creating RC which spawns configmap-volume pods
Apr 8 00:31:54.089: INFO: Pod name wrapped-volume-race-07fbfbde-6547-44de-ad0e-8d0bba1424f8: Found 0 pods out of 5
Apr 8 00:31:59.110: INFO: Pod name wrapped-volume-race-07fbfbde-6547-44de-ad0e-8d0bba1424f8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-07fbfbde-6547-44de-ad0e-8d0bba1424f8 in namespace emptydir-wrapper-5011, will wait for the garbage collector to delete the pods
Apr 8 00:32:13.198: INFO: Deleting ReplicationController wrapped-volume-race-07fbfbde-6547-44de-ad0e-8d0bba1424f8 took: 8.180804ms
Apr 8 00:32:13.598: INFO: Terminating ReplicationController wrapped-volume-race-07fbfbde-6547-44de-ad0e-8d0bba1424f8 pods took: 400.259159ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:32:23.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5011" for this suite.
• [SLOW TEST:90.264 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":178,"skipped":3146,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:32:23.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-b2569c15-f23f-4bd0-bf09-9d03cfb0d4f9
STEP: Creating a pod to test consume configMaps
Apr 8 00:32:23.822: INFO: Waiting up to 5m0s for pod "pod-configmaps-47f89cae-ba72-4b67-8f5d-278874b9ada0" in namespace "configmap-3016" to be "Succeeded or Failed"
Apr 8 00:32:23.826: INFO: Pod "pod-configmaps-47f89cae-ba72-4b67-8f5d-278874b9ada0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.556656ms
Apr 8 00:32:25.928: INFO: Pod "pod-configmaps-47f89cae-ba72-4b67-8f5d-278874b9ada0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105700629s
Apr 8 00:32:27.932: INFO: Pod "pod-configmaps-47f89cae-ba72-4b67-8f5d-278874b9ada0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109676162s
STEP: Saw pod success
Apr 8 00:32:27.932: INFO: Pod "pod-configmaps-47f89cae-ba72-4b67-8f5d-278874b9ada0" satisfied condition "Succeeded or Failed"
Apr 8 00:32:27.934: INFO: Trying to get logs from node latest-worker pod pod-configmaps-47f89cae-ba72-4b67-8f5d-278874b9ada0 container configmap-volume-test:
STEP: delete the pod
Apr 8 00:32:28.025: INFO: Waiting for pod pod-configmaps-47f89cae-ba72-4b67-8f5d-278874b9ada0 to disappear
Apr 8 00:32:28.032: INFO: Pod pod-configmaps-47f89cae-ba72-4b67-8f5d-278874b9ada0 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:32:28.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3016" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:32:28.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 8 00:32:28.118: INFO: Waiting up to 5m0s for pod "pod-f2dcdc17-3d80-4086-a257-35cf58fdcfa3" in namespace "emptydir-4262" to be "Succeeded or Failed"
Apr 8 00:32:28.122: INFO: Pod "pod-f2dcdc17-3d80-4086-a257-35cf58fdcfa3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.56575ms
Apr 8 00:32:30.203: INFO: Pod "pod-f2dcdc17-3d80-4086-a257-35cf58fdcfa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084624441s
Apr 8 00:32:32.210: INFO: Pod "pod-f2dcdc17-3d80-4086-a257-35cf58fdcfa3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092117672s
STEP: Saw pod success
Apr 8 00:32:32.210: INFO: Pod "pod-f2dcdc17-3d80-4086-a257-35cf58fdcfa3" satisfied condition "Succeeded or Failed"
Apr 8 00:32:32.216: INFO: Trying to get logs from node latest-worker2 pod pod-f2dcdc17-3d80-4086-a257-35cf58fdcfa3 container test-container:
STEP: delete the pod
Apr 8 00:32:32.288: INFO: Waiting for pod pod-f2dcdc17-3d80-4086-a257-35cf58fdcfa3 to disappear
Apr 8 00:32:32.294: INFO: Pod pod-f2dcdc17-3d80-4086-a257-35cf58fdcfa3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:32:32.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4262" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3179,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:32:32.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 8 00:32:32.539: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 8 00:32:37.543: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:32:38.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-294" for this suite.
• [SLOW TEST:6.258 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":181,"skipped":3195,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:32:38.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-ba5ec4a2-e51c-44e7-91df-a3a48e066867
STEP: Creating a pod to test consume configMaps
Apr 8 00:32:38.663: INFO: Waiting up to 5m0s for pod "pod-configmaps-38a8b92a-fff5-4500-b9ec-233ea458afbd" in namespace "configmap-7496" to be "Succeeded or Failed"
Apr 8 00:32:38.701: INFO: Pod "pod-configmaps-38a8b92a-fff5-4500-b9ec-233ea458afbd": Phase="Pending", Reason="", readiness=false. Elapsed: 37.78085ms
Apr 8 00:32:40.705: INFO: Pod "pod-configmaps-38a8b92a-fff5-4500-b9ec-233ea458afbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041973733s
Apr 8 00:32:42.708: INFO: Pod "pod-configmaps-38a8b92a-fff5-4500-b9ec-233ea458afbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045346118s
STEP: Saw pod success
Apr 8 00:32:42.708: INFO: Pod "pod-configmaps-38a8b92a-fff5-4500-b9ec-233ea458afbd" satisfied condition "Succeeded or Failed"
Apr 8 00:32:42.711: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-38a8b92a-fff5-4500-b9ec-233ea458afbd container configmap-volume-test:
STEP: delete the pod
Apr 8 00:32:42.733: INFO: Waiting for pod pod-configmaps-38a8b92a-fff5-4500-b9ec-233ea458afbd to disappear
Apr 8 00:32:42.737: INFO: Pod pod-configmaps-38a8b92a-fff5-4500-b9ec-233ea458afbd no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:32:42.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7496" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3210,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:32:42.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:32:46.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1629" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3222,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:32:46.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0408 00:32:47.979644 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 8 00:32:47.979: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:32:47.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-77" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":184,"skipped":3224,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:32:48.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-7025
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 8 00:32:48.181: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 8 00:32:48.267: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 8 00:32:50.320: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 8 00:32:52.270: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 8 00:32:54.270: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 8 00:32:56.287: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 8 00:32:58.273: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 8 00:33:00.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 8 00:33:02.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 8 00:33:04.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 8 00:33:06.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 8 00:33:08.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 8 00:33:10.271: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 8 00:33:10.277: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 8 00:33:14.305: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.194:8080/dial?request=hostname&protocol=udp&host=10.244.2.193&port=8081&tries=1'] Namespace:pod-network-test-7025 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 8 00:33:14.305: INFO: >>> kubeConfig: /root/.kube/config
I0408 00:33:14.338929 7 log.go:172] (0xc002dcc8f0) (0xc0011bd180) Create stream
I0408 00:33:14.338966 7 log.go:172] (0xc002dcc8f0) (0xc0011bd180) Stream added, broadcasting: 1
I0408 00:33:14.340988 7 log.go:172] (0xc002dcc8f0) Reply frame received for 1
I0408 00:33:14.341033 7 log.go:172] (0xc002dcc8f0) (0xc0011bd220) Create stream
I0408 00:33:14.341044 7 log.go:172] (0xc002dcc8f0) (0xc0011bd220) Stream added, broadcasting: 3
I0408 00:33:14.342103 7 log.go:172] (0xc002dcc8f0) Reply frame received for 3
I0408 00:33:14.342140 7 log.go:172] (0xc002dcc8f0) (0xc000b53e00) Create stream
I0408 00:33:14.342167 7 log.go:172] (0xc002dcc8f0) (0xc000b53e00) Stream added, broadcasting: 5
I0408 00:33:14.343138 7 log.go:172] (0xc002dcc8f0) Reply frame received for 5
I0408 00:33:14.439860 7 log.go:172] (0xc002dcc8f0) Data frame received for 3
I0408 00:33:14.439910 7 log.go:172] (0xc0011bd220) (3) Data frame handling
I0408 00:33:14.439941 7 log.go:172] (0xc0011bd220) (3) Data frame sent
I0408 00:33:14.440151 7 log.go:172] (0xc002dcc8f0) Data frame received for 3
I0408 00:33:14.440179 7 log.go:172] (0xc0011bd220) (3) Data frame handling
I0408 00:33:14.440577 7 log.go:172] (0xc002dcc8f0) Data frame received for 5
I0408 00:33:14.440599 7 log.go:172] (0xc000b53e00) (5) Data frame handling
I0408 00:33:14.442240 7 log.go:172] (0xc002dcc8f0) Data frame received for 1
I0408 00:33:14.442278 7 log.go:172] (0xc0011bd180) (1) Data frame handling
I0408 00:33:14.442301 7 log.go:172] (0xc0011bd180) (1) Data frame sent
I0408 00:33:14.442324 7 log.go:172] (0xc002dcc8f0) (0xc0011bd180) Stream removed, broadcasting: 1
I0408 00:33:14.442463 7 log.go:172] (0xc002dcc8f0) Go away received
I0408 00:33:14.442595 7 log.go:172] (0xc002dcc8f0) (0xc0011bd180) Stream removed, broadcasting: 1
I0408 00:33:14.442635 7 log.go:172] (0xc002dcc8f0) (0xc0011bd220) Stream removed, broadcasting: 3
I0408 00:33:14.442648 7 log.go:172] (0xc002dcc8f0) (0xc000b53e00) Stream removed, broadcasting: 5
Apr 8 00:33:14.442: INFO: Waiting for responses: map[]
Apr 8 00:33:14.448: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.194:8080/dial?request=hostname&protocol=udp&host=10.244.1.224&port=8081&tries=1'] Namespace:pod-network-test-7025 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 8 00:33:14.448: INFO: >>> kubeConfig: /root/.kube/config
I0408 00:33:14.476613 7 log.go:172] (0xc002d90840) (0xc002a2ad20) Create stream
I0408 00:33:14.476694 7 log.go:172] (0xc002d90840) (0xc002a2ad20) Stream added, broadcasting: 1
I0408 00:33:14.478445 7 log.go:172] (0xc002d90840) Reply frame received for 1
I0408 00:33:14.478475 7 log.go:172] (0xc002d90840) (0xc0019c7ae0) Create stream
I0408 00:33:14.478488 7 log.go:172] (0xc002d90840) (0xc0019c7ae0) Stream added, broadcasting: 3
I0408 00:33:14.479318 7 log.go:172] (0xc002d90840) Reply frame received for 3
I0408 00:33:14.479361 7 log.go:172] (0xc002d90840) (0xc0019c7d60) Create stream
I0408 00:33:14.479385 7 log.go:172] (0xc002d90840) (0xc0019c7d60) Stream added, broadcasting: 5
I0408 00:33:14.480132 7 log.go:172] (0xc002d90840) Reply frame received for 5
I0408 00:33:14.563473 7 log.go:172] (0xc002d90840) Data frame received for 3
I0408 00:33:14.563515 7 log.go:172] (0xc0019c7ae0) (3) Data frame handling
I0408 00:33:14.563532 7 log.go:172] (0xc0019c7ae0) (3) Data frame sent
I0408 00:33:14.563820 7 log.go:172] (0xc002d90840) Data frame received for 5
I0408 00:33:14.563835 7 log.go:172] (0xc0019c7d60) (5) Data frame handling
I0408 00:33:14.563897 7 log.go:172] (0xc002d90840) Data frame received for 3
I0408 00:33:14.563922 7 log.go:172] (0xc0019c7ae0) (3) Data frame handling
I0408 00:33:14.565609 7 log.go:172] (0xc002d90840) Data frame received for 1
I0408 00:33:14.565634 7 log.go:172] (0xc002a2ad20) (1) Data frame handling
I0408 00:33:14.565649 7 log.go:172] (0xc002a2ad20) (1) Data frame sent
I0408 00:33:14.565658 7 log.go:172] (0xc002d90840) (0xc002a2ad20) Stream removed, broadcasting: 1
I0408 00:33:14.565734 7 log.go:172] (0xc002d90840) Go away received
I0408 00:33:14.565774 7 log.go:172] (0xc002d90840) (0xc002a2ad20) Stream removed, broadcasting: 1
I0408 00:33:14.565790 7 log.go:172] (0xc002d90840) (0xc0019c7ae0) Stream removed, broadcasting: 3
I0408 00:33:14.565800 7 log.go:172] (0xc002d90840) (0xc0019c7d60) Stream removed, broadcasting: 5
Apr 8 00:33:14.565: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:33:14.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7025" for this suite.
• [SLOW TEST:26.522 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3258,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:33:14.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 8 00:33:14.956: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 8 00:33:16.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902794, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902794, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902794, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902794, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 8 00:33:19.990: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:33:20.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7857" for this suite.
STEP: Destroying namespace "webhook-7857-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.311 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":186,"skipped":3270,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:33:20.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-8afb5b58-23c5-44c9-a2af-0775344e2748
STEP: Creating secret with name secret-projected-all-test-volume-2dbbe1fa-6814-4ddb-9c2d-7ec26c5d1e26
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 8 00:33:21.236: INFO: Waiting up to 5m0s for pod "projected-volume-085d049a-37a1-417d-a1f9-4fc525863e43" in namespace "projected-5013" to be "Succeeded or Failed"
Apr 8 00:33:21.281: INFO: Pod "projected-volume-085d049a-37a1-417d-a1f9-4fc525863e43": Phase="Pending", Reason="", readiness=false. Elapsed: 45.35215ms
Apr 8 00:33:23.286: INFO: Pod "projected-volume-085d049a-37a1-417d-a1f9-4fc525863e43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049749558s
Apr 8 00:33:25.294: INFO: Pod "projected-volume-085d049a-37a1-417d-a1f9-4fc525863e43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057869254s
STEP: Saw pod success
Apr 8 00:33:25.294: INFO: Pod "projected-volume-085d049a-37a1-417d-a1f9-4fc525863e43" satisfied condition "Succeeded or Failed"
Apr 8 00:33:25.298: INFO: Trying to get logs from node latest-worker2 pod projected-volume-085d049a-37a1-417d-a1f9-4fc525863e43 container projected-all-volume-test:
STEP: delete the pod
Apr 8 00:33:25.335: INFO: Waiting for pod projected-volume-085d049a-37a1-417d-a1f9-4fc525863e43 to disappear
Apr 8 00:33:25.349: INFO: Pod projected-volume-085d049a-37a1-417d-a1f9-4fc525863e43 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:33:25.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5013" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3315,"failed":0}
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:33:25.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:33:29.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1653" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3317,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:33:29.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 8 00:33:33.586: INFO: Waiting up to 5m0s for pod "client-envvars-cc4833b0-d4ce-4ba5-b0e2-02026dc35f97" in namespace "pods-8508" to be "Succeeded or Failed"
Apr 8 00:33:33.600: INFO: Pod "client-envvars-cc4833b0-d4ce-4ba5-b0e2-02026dc35f97": Phase="Pending", Reason="", readiness=false. Elapsed: 14.20969ms
Apr 8 00:33:35.663: INFO: Pod "client-envvars-cc4833b0-d4ce-4ba5-b0e2-02026dc35f97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077679102s
Apr 8 00:33:37.667: INFO: Pod "client-envvars-cc4833b0-d4ce-4ba5-b0e2-02026dc35f97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081201896s
STEP: Saw pod success
Apr 8 00:33:37.667: INFO: Pod "client-envvars-cc4833b0-d4ce-4ba5-b0e2-02026dc35f97" satisfied condition "Succeeded or Failed"
Apr 8 00:33:37.670: INFO: Trying to get logs from node latest-worker pod client-envvars-cc4833b0-d4ce-4ba5-b0e2-02026dc35f97 container env3cont:
STEP: delete the pod
Apr 8 00:33:37.688: INFO: Waiting for pod client-envvars-cc4833b0-d4ce-4ba5-b0e2-02026dc35f97 to disappear
Apr 8 00:33:37.694: INFO: Pod client-envvars-cc4833b0-d4ce-4ba5-b0e2-02026dc35f97 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:33:37.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8508" for this suite.
• [SLOW TEST:8.244 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3329,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota.
[Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:33:37.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:33:37.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3156" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":190,"skipped":3445,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:33:37.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 8 00:33:37.990: INFO: Waiting up to 5m0s for pod "pod-b6a7774a-a3e7-4ae9-a6de-ca30fdbd76a4" in namespace "emptydir-1607" to be "Succeeded or Failed"
Apr 8 00:33:37.999: INFO: Pod "pod-b6a7774a-a3e7-4ae9-a6de-ca30fdbd76a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73677ms
Apr 8 00:33:40.002: INFO: Pod "pod-b6a7774a-a3e7-4ae9-a6de-ca30fdbd76a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011607017s
Apr 8 00:33:42.006: INFO: Pod "pod-b6a7774a-a3e7-4ae9-a6de-ca30fdbd76a4": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.015480908s
STEP: Saw pod success
Apr 8 00:33:42.006: INFO: Pod "pod-b6a7774a-a3e7-4ae9-a6de-ca30fdbd76a4" satisfied condition "Succeeded or Failed"
Apr 8 00:33:42.009: INFO: Trying to get logs from node latest-worker pod pod-b6a7774a-a3e7-4ae9-a6de-ca30fdbd76a4 container test-container:
STEP: delete the pod
Apr 8 00:33:42.022: INFO: Waiting for pod pod-b6a7774a-a3e7-4ae9-a6de-ca30fdbd76a4 to disappear
Apr 8 00:33:42.041: INFO: Pod pod-b6a7774a-a3e7-4ae9-a6de-ca30fdbd76a4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:33:42.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1607" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3447,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:33:42.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for
the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:33:48.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7689" for this suite.
STEP: Destroying namespace "nsdeletetest-6432" for this suite.
Apr 8 00:33:48.301: INFO: Namespace nsdeletetest-6432 was already deleted
STEP: Destroying namespace "nsdeletetest-2163" for this suite.
• [SLOW TEST:6.255 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":192,"skipped":3456,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:33:48.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 8 00:33:49.009: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 8 00:33:51.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902829, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902829, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902829, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902828, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 8 00:33:54.042: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:33:54.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3264" for this suite.
STEP: Destroying namespace "webhook-3264-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.262 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":193,"skipped":3460,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:33:54.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 8 00:33:54.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4197
I0408 00:33:54.661350       7 runners.go:190] Created replication controller with
name: svc-latency-rc, namespace: svc-latency-4197, replica count: 1 I0408 00:33:55.711817 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 00:33:56.712038 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 00:33:57.712257 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 8 00:33:57.830: INFO: Created: latency-svc-4fg6r Apr 8 00:33:57.837: INFO: Got endpoints: latency-svc-4fg6r [25.340267ms] Apr 8 00:33:57.890: INFO: Created: latency-svc-gvb8l Apr 8 00:33:57.914: INFO: Got endpoints: latency-svc-gvb8l [77.04924ms] Apr 8 00:33:57.945: INFO: Created: latency-svc-pg89d Apr 8 00:33:57.958: INFO: Got endpoints: latency-svc-pg89d [120.305695ms] Apr 8 00:33:58.025: INFO: Created: latency-svc-tvw4v Apr 8 00:33:58.045: INFO: Created: latency-svc-wcx5k Apr 8 00:33:58.045: INFO: Got endpoints: latency-svc-tvw4v [207.210341ms] Apr 8 00:33:58.054: INFO: Got endpoints: latency-svc-wcx5k [216.129548ms] Apr 8 00:33:58.069: INFO: Created: latency-svc-9g5gw Apr 8 00:33:58.078: INFO: Got endpoints: latency-svc-9g5gw [240.698286ms] Apr 8 00:33:58.094: INFO: Created: latency-svc-w78d9 Apr 8 00:33:58.108: INFO: Got endpoints: latency-svc-w78d9 [270.093991ms] Apr 8 00:33:58.124: INFO: Created: latency-svc-4dtmt Apr 8 00:33:58.168: INFO: Got endpoints: latency-svc-4dtmt [330.616172ms] Apr 8 00:33:58.182: INFO: Created: latency-svc-b428f Apr 8 00:33:58.199: INFO: Got endpoints: latency-svc-b428f [360.893071ms] Apr 8 00:33:58.225: INFO: Created: latency-svc-qh2k5 Apr 8 00:33:58.236: INFO: Got endpoints: latency-svc-qh2k5 [399.024196ms] Apr 8 00:33:58.255: INFO: Created: latency-svc-nx9m5 Apr 8 00:33:58.267: INFO: Got endpoints: latency-svc-nx9m5 [428.999213ms] Apr 8 
00:33:58.307: INFO: Created: latency-svc-48z6s Apr 8 00:33:58.321: INFO: Got endpoints: latency-svc-48z6s [483.731074ms] Apr 8 00:33:58.335: INFO: Created: latency-svc-nssz2 Apr 8 00:33:58.371: INFO: Got endpoints: latency-svc-nssz2 [532.74105ms] Apr 8 00:33:58.432: INFO: Created: latency-svc-s759k Apr 8 00:33:58.459: INFO: Got endpoints: latency-svc-s759k [621.497525ms] Apr 8 00:33:58.460: INFO: Created: latency-svc-x48r2 Apr 8 00:33:58.489: INFO: Got endpoints: latency-svc-x48r2 [650.828788ms] Apr 8 00:33:58.508: INFO: Created: latency-svc-mp2t5 Apr 8 00:33:58.525: INFO: Got endpoints: latency-svc-mp2t5 [686.969136ms] Apr 8 00:33:58.563: INFO: Created: latency-svc-rbdjk Apr 8 00:33:58.572: INFO: Got endpoints: latency-svc-rbdjk [657.669259ms] Apr 8 00:33:58.604: INFO: Created: latency-svc-r7lc5 Apr 8 00:33:58.623: INFO: Got endpoints: latency-svc-r7lc5 [664.963893ms] Apr 8 00:33:58.639: INFO: Created: latency-svc-qndv9 Apr 8 00:33:58.653: INFO: Got endpoints: latency-svc-qndv9 [607.966829ms] Apr 8 00:33:58.695: INFO: Created: latency-svc-c7dxx Apr 8 00:33:58.711: INFO: Created: latency-svc-99dl7 Apr 8 00:33:58.712: INFO: Got endpoints: latency-svc-c7dxx [657.865651ms] Apr 8 00:33:58.725: INFO: Got endpoints: latency-svc-99dl7 [646.863474ms] Apr 8 00:33:58.743: INFO: Created: latency-svc-pbvld Apr 8 00:33:58.755: INFO: Got endpoints: latency-svc-pbvld [647.118967ms] Apr 8 00:33:58.771: INFO: Created: latency-svc-scxx9 Apr 8 00:33:58.790: INFO: Got endpoints: latency-svc-scxx9 [622.309942ms] Apr 8 00:33:58.839: INFO: Created: latency-svc-b5tlt Apr 8 00:33:58.855: INFO: Got endpoints: latency-svc-b5tlt [656.306854ms] Apr 8 00:33:58.887: INFO: Created: latency-svc-52vsr Apr 8 00:33:58.902: INFO: Got endpoints: latency-svc-52vsr [665.675976ms] Apr 8 00:33:58.916: INFO: Created: latency-svc-tzc5h Apr 8 00:33:58.932: INFO: Got endpoints: latency-svc-tzc5h [665.052951ms] Apr 8 00:33:58.965: INFO: Created: latency-svc-f49mx Apr 8 00:33:58.981: INFO: Created: 
latency-svc-7gzdx Apr 8 00:33:58.981: INFO: Got endpoints: latency-svc-f49mx [659.740717ms] Apr 8 00:33:59.017: INFO: Got endpoints: latency-svc-7gzdx [646.424233ms] Apr 8 00:33:59.049: INFO: Created: latency-svc-tthxx Apr 8 00:33:59.064: INFO: Got endpoints: latency-svc-tthxx [604.877597ms] Apr 8 00:33:59.115: INFO: Created: latency-svc-nhmk7 Apr 8 00:33:59.133: INFO: Got endpoints: latency-svc-nhmk7 [644.112236ms] Apr 8 00:33:59.134: INFO: Created: latency-svc-ggt5k Apr 8 00:33:59.142: INFO: Got endpoints: latency-svc-ggt5k [617.229271ms] Apr 8 00:33:59.161: INFO: Created: latency-svc-8hn8p Apr 8 00:33:59.174: INFO: Got endpoints: latency-svc-8hn8p [601.851178ms] Apr 8 00:33:59.191: INFO: Created: latency-svc-rlcv5 Apr 8 00:33:59.204: INFO: Got endpoints: latency-svc-rlcv5 [581.289917ms] Apr 8 00:33:59.264: INFO: Created: latency-svc-t9jnn Apr 8 00:33:59.301: INFO: Created: latency-svc-hc4jb Apr 8 00:33:59.301: INFO: Got endpoints: latency-svc-t9jnn [648.212361ms] Apr 8 00:33:59.318: INFO: Got endpoints: latency-svc-hc4jb [606.526176ms] Apr 8 00:33:59.343: INFO: Created: latency-svc-9ccm9 Apr 8 00:33:59.360: INFO: Got endpoints: latency-svc-9ccm9 [635.090749ms] Apr 8 00:33:59.397: INFO: Created: latency-svc-ljppd Apr 8 00:33:59.402: INFO: Got endpoints: latency-svc-ljppd [646.877567ms] Apr 8 00:33:59.419: INFO: Created: latency-svc-4cpxc Apr 8 00:33:59.444: INFO: Got endpoints: latency-svc-4cpxc [653.305381ms] Apr 8 00:33:59.469: INFO: Created: latency-svc-54n6t Apr 8 00:33:59.483: INFO: Got endpoints: latency-svc-54n6t [627.98802ms] Apr 8 00:33:59.549: INFO: Created: latency-svc-m84xd Apr 8 00:33:59.565: INFO: Created: latency-svc-rsm6w Apr 8 00:33:59.565: INFO: Got endpoints: latency-svc-m84xd [662.837902ms] Apr 8 00:33:59.591: INFO: Got endpoints: latency-svc-rsm6w [659.017993ms] Apr 8 00:33:59.611: INFO: Created: latency-svc-7q9wp Apr 8 00:33:59.631: INFO: Got endpoints: latency-svc-7q9wp [649.348616ms] Apr 8 00:33:59.671: INFO: Created: latency-svc-gngn9 Apr 
8 00:33:59.675: INFO: Got endpoints: latency-svc-gngn9 [658.060012ms] Apr 8 00:33:59.703: INFO: Created: latency-svc-4gwgg Apr 8 00:33:59.717: INFO: Got endpoints: latency-svc-4gwgg [653.228104ms] Apr 8 00:33:59.738: INFO: Created: latency-svc-j68pj Apr 8 00:33:59.756: INFO: Got endpoints: latency-svc-j68pj [623.481543ms] Apr 8 00:33:59.845: INFO: Created: latency-svc-zbxgl Apr 8 00:33:59.857: INFO: Got endpoints: latency-svc-zbxgl [715.056493ms] Apr 8 00:33:59.887: INFO: Created: latency-svc-2tttw Apr 8 00:33:59.899: INFO: Got endpoints: latency-svc-2tttw [725.011932ms] Apr 8 00:33:59.947: INFO: Created: latency-svc-9qkkz Apr 8 00:33:59.973: INFO: Created: latency-svc-bwvv8 Apr 8 00:33:59.973: INFO: Got endpoints: latency-svc-9qkkz [768.572664ms] Apr 8 00:34:00.003: INFO: Got endpoints: latency-svc-bwvv8 [701.250923ms] Apr 8 00:34:00.037: INFO: Created: latency-svc-kd97g Apr 8 00:34:00.060: INFO: Got endpoints: latency-svc-kd97g [741.999652ms] Apr 8 00:34:00.079: INFO: Created: latency-svc-dvzv6 Apr 8 00:34:00.090: INFO: Got endpoints: latency-svc-dvzv6 [730.278577ms] Apr 8 00:34:00.135: INFO: Created: latency-svc-97hzj Apr 8 00:34:00.151: INFO: Got endpoints: latency-svc-97hzj [748.690981ms] Apr 8 00:34:00.192: INFO: Created: latency-svc-l6d67 Apr 8 00:34:00.213: INFO: Created: latency-svc-xj94r Apr 8 00:34:00.213: INFO: Got endpoints: latency-svc-l6d67 [769.204705ms] Apr 8 00:34:00.227: INFO: Got endpoints: latency-svc-xj94r [743.568185ms] Apr 8 00:34:00.289: INFO: Created: latency-svc-t7cl9 Apr 8 00:34:00.324: INFO: Got endpoints: latency-svc-t7cl9 [758.711749ms] Apr 8 00:34:00.343: INFO: Created: latency-svc-shp2m Apr 8 00:34:00.359: INFO: Got endpoints: latency-svc-shp2m [767.908073ms] Apr 8 00:34:00.399: INFO: Created: latency-svc-zljz4 Apr 8 00:34:00.462: INFO: Got endpoints: latency-svc-zljz4 [831.182966ms] Apr 8 00:34:00.481: INFO: Created: latency-svc-84j2h Apr 8 00:34:00.495: INFO: Got endpoints: latency-svc-84j2h [820.342597ms] Apr 8 00:34:00.523: 
INFO: Created: latency-svc-vdvql Apr 8 00:34:00.537: INFO: Got endpoints: latency-svc-vdvql [820.060118ms] Apr 8 00:34:00.606: INFO: Created: latency-svc-s4vjc Apr 8 00:34:00.611: INFO: Got endpoints: latency-svc-s4vjc [854.92018ms] Apr 8 00:34:00.662: INFO: Created: latency-svc-sc7rq Apr 8 00:34:00.678: INFO: Got endpoints: latency-svc-sc7rq [821.441535ms] Apr 8 00:34:00.703: INFO: Created: latency-svc-j5ds2 Apr 8 00:34:00.731: INFO: Got endpoints: latency-svc-j5ds2 [831.992957ms] Apr 8 00:34:00.760: INFO: Created: latency-svc-d5tvg Apr 8 00:34:00.768: INFO: Got endpoints: latency-svc-d5tvg [794.643554ms] Apr 8 00:34:00.794: INFO: Created: latency-svc-xghtj Apr 8 00:34:00.810: INFO: Got endpoints: latency-svc-xghtj [807.099527ms] Apr 8 00:34:00.883: INFO: Created: latency-svc-7jjld Apr 8 00:34:00.906: INFO: Got endpoints: latency-svc-7jjld [845.695072ms] Apr 8 00:34:00.931: INFO: Created: latency-svc-zg5p8 Apr 8 00:34:00.951: INFO: Got endpoints: latency-svc-zg5p8 [860.889057ms] Apr 8 00:34:01.013: INFO: Created: latency-svc-qljkw Apr 8 00:34:01.028: INFO: Got endpoints: latency-svc-qljkw [877.594026ms] Apr 8 00:34:01.029: INFO: Created: latency-svc-4lqdx Apr 8 00:34:01.040: INFO: Got endpoints: latency-svc-4lqdx [827.33717ms] Apr 8 00:34:01.064: INFO: Created: latency-svc-cqmm2 Apr 8 00:34:01.077: INFO: Got endpoints: latency-svc-cqmm2 [850.276098ms] Apr 8 00:34:01.100: INFO: Created: latency-svc-xg6j2 Apr 8 00:34:01.156: INFO: Got endpoints: latency-svc-xg6j2 [832.384815ms] Apr 8 00:34:01.183: INFO: Created: latency-svc-8mf6n Apr 8 00:34:01.197: INFO: Got endpoints: latency-svc-8mf6n [837.564681ms] Apr 8 00:34:01.220: INFO: Created: latency-svc-q76d8 Apr 8 00:34:01.239: INFO: Got endpoints: latency-svc-q76d8 [776.755632ms] Apr 8 00:34:01.304: INFO: Created: latency-svc-sm4gm Apr 8 00:34:01.347: INFO: Got endpoints: latency-svc-sm4gm [851.333125ms] Apr 8 00:34:01.402: INFO: Created: latency-svc-j78rg Apr 8 00:34:01.423: INFO: Created: latency-svc-wf9sz Apr 8 
00:34:01.423: INFO: Got endpoints: latency-svc-j78rg [886.117478ms] Apr 8 00:34:01.438: INFO: Got endpoints: latency-svc-wf9sz [826.843113ms] Apr 8 00:34:01.484: INFO: Created: latency-svc-gg7jb Apr 8 00:34:01.499: INFO: Got endpoints: latency-svc-gg7jb [820.277805ms] Apr 8 00:34:01.556: INFO: Created: latency-svc-lcl4l Apr 8 00:34:01.570: INFO: Got endpoints: latency-svc-lcl4l [838.897125ms] Apr 8 00:34:01.596: INFO: Created: latency-svc-m7tvk Apr 8 00:34:01.607: INFO: Got endpoints: latency-svc-m7tvk [839.176716ms] Apr 8 00:34:01.626: INFO: Created: latency-svc-v8mr2 Apr 8 00:34:01.678: INFO: Got endpoints: latency-svc-v8mr2 [867.715351ms] Apr 8 00:34:01.707: INFO: Created: latency-svc-dcmw4 Apr 8 00:34:01.718: INFO: Got endpoints: latency-svc-dcmw4 [811.654815ms] Apr 8 00:34:01.742: INFO: Created: latency-svc-qxxlz Apr 8 00:34:01.754: INFO: Got endpoints: latency-svc-qxxlz [802.377617ms] Apr 8 00:34:01.803: INFO: Created: latency-svc-nhhkc Apr 8 00:34:01.812: INFO: Got endpoints: latency-svc-nhhkc [784.217169ms] Apr 8 00:34:01.843: INFO: Created: latency-svc-sscl8 Apr 8 00:34:01.856: INFO: Got endpoints: latency-svc-sscl8 [815.279923ms] Apr 8 00:34:01.892: INFO: Created: latency-svc-vhhzc Apr 8 00:34:01.922: INFO: Got endpoints: latency-svc-vhhzc [845.597376ms] Apr 8 00:34:01.958: INFO: Created: latency-svc-v78dv Apr 8 00:34:01.969: INFO: Got endpoints: latency-svc-v78dv [812.939738ms] Apr 8 00:34:01.993: INFO: Created: latency-svc-fxhvl Apr 8 00:34:02.005: INFO: Got endpoints: latency-svc-fxhvl [808.87722ms] Apr 8 00:34:02.055: INFO: Created: latency-svc-klqbm Apr 8 00:34:02.077: INFO: Got endpoints: latency-svc-klqbm [838.576396ms] Apr 8 00:34:02.077: INFO: Created: latency-svc-9x446 Apr 8 00:34:02.085: INFO: Got endpoints: latency-svc-9x446 [738.451764ms] Apr 8 00:34:02.101: INFO: Created: latency-svc-qkwws Apr 8 00:34:02.110: INFO: Got endpoints: latency-svc-qkwws [686.437326ms] Apr 8 00:34:02.126: INFO: Created: latency-svc-69zxh Apr 8 00:34:02.140: INFO: 
Got endpoints: latency-svc-69zxh [701.64141ms] Apr 8 00:34:02.186: INFO: Created: latency-svc-cl58j Apr 8 00:34:02.204: INFO: Created: latency-svc-lpk5s Apr 8 00:34:02.204: INFO: Got endpoints: latency-svc-cl58j [705.641173ms] Apr 8 00:34:02.217: INFO: Got endpoints: latency-svc-lpk5s [647.208184ms] Apr 8 00:34:02.239: INFO: Created: latency-svc-g4cwq Apr 8 00:34:02.254: INFO: Got endpoints: latency-svc-g4cwq [646.875377ms] Apr 8 00:34:02.275: INFO: Created: latency-svc-xbclm Apr 8 00:34:02.348: INFO: Got endpoints: latency-svc-xbclm [670.810185ms] Apr 8 00:34:02.350: INFO: Created: latency-svc-lfd6p Apr 8 00:34:02.371: INFO: Got endpoints: latency-svc-lfd6p [653.558662ms] Apr 8 00:34:02.403: INFO: Created: latency-svc-6jhps Apr 8 00:34:02.424: INFO: Got endpoints: latency-svc-6jhps [670.460333ms] Apr 8 00:34:02.480: INFO: Created: latency-svc-v2lb4 Apr 8 00:34:02.485: INFO: Got endpoints: latency-svc-v2lb4 [672.24377ms] Apr 8 00:34:02.509: INFO: Created: latency-svc-pqz6m Apr 8 00:34:02.521: INFO: Got endpoints: latency-svc-pqz6m [664.951403ms] Apr 8 00:34:02.547: INFO: Created: latency-svc-45zs2 Apr 8 00:34:02.563: INFO: Got endpoints: latency-svc-45zs2 [640.373079ms] Apr 8 00:34:02.617: INFO: Created: latency-svc-rhfz6 Apr 8 00:34:02.653: INFO: Got endpoints: latency-svc-rhfz6 [683.395201ms] Apr 8 00:34:02.653: INFO: Created: latency-svc-6cnkw Apr 8 00:34:02.667: INFO: Got endpoints: latency-svc-6cnkw [661.138134ms] Apr 8 00:34:02.682: INFO: Created: latency-svc-k2lgn Apr 8 00:34:02.697: INFO: Got endpoints: latency-svc-k2lgn [619.477926ms] Apr 8 00:34:02.713: INFO: Created: latency-svc-fkvrx Apr 8 00:34:02.767: INFO: Got endpoints: latency-svc-fkvrx [681.791661ms] Apr 8 00:34:02.770: INFO: Created: latency-svc-5dk6j Apr 8 00:34:02.774: INFO: Got endpoints: latency-svc-5dk6j [664.170541ms] Apr 8 00:34:02.804: INFO: Created: latency-svc-5rntt Apr 8 00:34:02.829: INFO: Got endpoints: latency-svc-5rntt [688.847277ms] Apr 8 00:34:02.846: INFO: Created: 
latency-svc-ll2xj Apr 8 00:34:02.859: INFO: Got endpoints: latency-svc-ll2xj [654.287687ms] Apr 8 00:34:02.905: INFO: Created: latency-svc-kldjg Apr 8 00:34:02.923: INFO: Created: latency-svc-nprbm Apr 8 00:34:02.923: INFO: Got endpoints: latency-svc-kldjg [705.22307ms] Apr 8 00:34:02.940: INFO: Got endpoints: latency-svc-nprbm [686.638211ms] Apr 8 00:34:02.959: INFO: Created: latency-svc-zhclj Apr 8 00:34:02.983: INFO: Got endpoints: latency-svc-zhclj [634.071507ms] Apr 8 00:34:03.037: INFO: Created: latency-svc-vjctb Apr 8 00:34:03.056: INFO: Got endpoints: latency-svc-vjctb [684.241013ms] Apr 8 00:34:03.057: INFO: Created: latency-svc-rwf2b Apr 8 00:34:03.078: INFO: Got endpoints: latency-svc-rwf2b [653.95756ms] Apr 8 00:34:03.109: INFO: Created: latency-svc-6phvp Apr 8 00:34:03.126: INFO: Got endpoints: latency-svc-6phvp [641.492337ms] Apr 8 00:34:03.162: INFO: Created: latency-svc-xl66b Apr 8 00:34:03.168: INFO: Got endpoints: latency-svc-xl66b [647.115653ms] Apr 8 00:34:03.194: INFO: Created: latency-svc-cjdzg Apr 8 00:34:03.206: INFO: Got endpoints: latency-svc-cjdzg [643.047922ms] Apr 8 00:34:03.224: INFO: Created: latency-svc-f26qs Apr 8 00:34:03.236: INFO: Got endpoints: latency-svc-f26qs [582.740701ms] Apr 8 00:34:03.260: INFO: Created: latency-svc-8l84d Apr 8 00:34:03.288: INFO: Got endpoints: latency-svc-8l84d [621.217501ms] Apr 8 00:34:03.300: INFO: Created: latency-svc-2g46s Apr 8 00:34:03.320: INFO: Got endpoints: latency-svc-2g46s [622.89382ms] Apr 8 00:34:03.336: INFO: Created: latency-svc-g6pdv Apr 8 00:34:03.368: INFO: Got endpoints: latency-svc-g6pdv [600.741085ms] Apr 8 00:34:03.426: INFO: Created: latency-svc-7mkzd Apr 8 00:34:03.430: INFO: Got endpoints: latency-svc-7mkzd [655.493179ms] Apr 8 00:34:03.458: INFO: Created: latency-svc-8n7sk Apr 8 00:34:03.470: INFO: Got endpoints: latency-svc-8n7sk [640.895608ms] Apr 8 00:34:03.489: INFO: Created: latency-svc-pjr9g Apr 8 00:34:03.498: INFO: Got endpoints: latency-svc-pjr9g [638.821571ms] Apr 8 
00:34:03.516: INFO: Created: latency-svc-8gpbl Apr 8 00:34:03.545: INFO: Got endpoints: latency-svc-8gpbl [622.497331ms] Apr 8 00:34:03.558: INFO: Created: latency-svc-mj9gl Apr 8 00:34:03.588: INFO: Got endpoints: latency-svc-mj9gl [647.969805ms] Apr 8 00:34:03.620: INFO: Created: latency-svc-k8n7r Apr 8 00:34:03.635: INFO: Got endpoints: latency-svc-k8n7r [652.660831ms] Apr 8 00:34:03.671: INFO: Created: latency-svc-bzb7h Apr 8 00:34:03.692: INFO: Created: latency-svc-n2fq8 Apr 8 00:34:03.692: INFO: Got endpoints: latency-svc-bzb7h [636.228ms] Apr 8 00:34:03.701: INFO: Got endpoints: latency-svc-n2fq8 [622.741837ms] Apr 8 00:34:03.714: INFO: Created: latency-svc-x9qd9 Apr 8 00:34:03.731: INFO: Got endpoints: latency-svc-x9qd9 [604.884168ms] Apr 8 00:34:03.750: INFO: Created: latency-svc-ntg57 Apr 8 00:34:03.763: INFO: Got endpoints: latency-svc-ntg57 [594.933075ms] Apr 8 00:34:03.799: INFO: Created: latency-svc-rldwt Apr 8 00:34:03.811: INFO: Got endpoints: latency-svc-rldwt [604.981871ms] Apr 8 00:34:03.829: INFO: Created: latency-svc-scgk5 Apr 8 00:34:03.847: INFO: Got endpoints: latency-svc-scgk5 [611.646972ms] Apr 8 00:34:03.884: INFO: Created: latency-svc-r6tzn Apr 8 00:34:03.929: INFO: Got endpoints: latency-svc-r6tzn [640.584102ms] Apr 8 00:34:03.942: INFO: Created: latency-svc-gjt6t Apr 8 00:34:03.967: INFO: Got endpoints: latency-svc-gjt6t [646.675026ms] Apr 8 00:34:03.991: INFO: Created: latency-svc-n9226 Apr 8 00:34:04.003: INFO: Got endpoints: latency-svc-n9226 [634.756528ms] Apr 8 00:34:04.020: INFO: Created: latency-svc-mkdd5 Apr 8 00:34:04.072: INFO: Got endpoints: latency-svc-mkdd5 [642.462791ms] Apr 8 00:34:04.074: INFO: Created: latency-svc-xnrlb Apr 8 00:34:04.094: INFO: Got endpoints: latency-svc-xnrlb [623.874755ms] Apr 8 00:34:04.112: INFO: Created: latency-svc-fggx2 Apr 8 00:34:04.120: INFO: Got endpoints: latency-svc-fggx2 [622.785515ms] Apr 8 00:34:04.140: INFO: Created: latency-svc-zwlfk Apr 8 00:34:04.157: INFO: Got endpoints: 
latency-svc-zwlfk [611.258054ms] Apr 8 00:34:04.210: INFO: Created: latency-svc-7ljfp Apr 8 00:34:04.231: INFO: Created: latency-svc-bbmcl Apr 8 00:34:04.231: INFO: Got endpoints: latency-svc-7ljfp [642.087367ms] Apr 8 00:34:04.247: INFO: Got endpoints: latency-svc-bbmcl [611.554516ms] Apr 8 00:34:04.268: INFO: Created: latency-svc-x2nvf Apr 8 00:34:04.282: INFO: Got endpoints: latency-svc-x2nvf [590.310859ms] Apr 8 00:34:04.298: INFO: Created: latency-svc-v4vc8 Apr 8 00:34:04.306: INFO: Got endpoints: latency-svc-v4vc8 [605.088617ms] Apr 8 00:34:04.348: INFO: Created: latency-svc-86dg6 Apr 8 00:34:04.357: INFO: Got endpoints: latency-svc-86dg6 [625.574783ms] Apr 8 00:34:04.374: INFO: Created: latency-svc-jwc4l Apr 8 00:34:04.398: INFO: Got endpoints: latency-svc-jwc4l [635.394458ms] Apr 8 00:34:04.428: INFO: Created: latency-svc-thrc2 Apr 8 00:34:04.467: INFO: Got endpoints: latency-svc-thrc2 [656.340839ms] Apr 8 00:34:04.490: INFO: Created: latency-svc-6mfbn Apr 8 00:34:04.512: INFO: Got endpoints: latency-svc-6mfbn [664.965305ms] Apr 8 00:34:04.532: INFO: Created: latency-svc-zq2mk Apr 8 00:34:04.548: INFO: Got endpoints: latency-svc-zq2mk [619.543065ms] Apr 8 00:34:04.566: INFO: Created: latency-svc-krj8z Apr 8 00:34:04.623: INFO: Got endpoints: latency-svc-krj8z [656.879676ms] Apr 8 00:34:04.626: INFO: Created: latency-svc-jkb7c Apr 8 00:34:04.635: INFO: Got endpoints: latency-svc-jkb7c [632.626234ms] Apr 8 00:34:04.664: INFO: Created: latency-svc-xblgf Apr 8 00:34:04.678: INFO: Got endpoints: latency-svc-xblgf [605.69934ms] Apr 8 00:34:04.694: INFO: Created: latency-svc-9kwmr Apr 8 00:34:04.718: INFO: Got endpoints: latency-svc-9kwmr [624.277197ms] Apr 8 00:34:04.760: INFO: Created: latency-svc-khwx2 Apr 8 00:34:04.774: INFO: Got endpoints: latency-svc-khwx2 [653.488989ms] Apr 8 00:34:04.794: INFO: Created: latency-svc-vzxdb Apr 8 00:34:04.816: INFO: Got endpoints: latency-svc-vzxdb [659.035443ms] Apr 8 00:34:04.842: INFO: Created: latency-svc-5rzd9 Apr 8 
00:34:04.905: INFO: Got endpoints: latency-svc-5rzd9 [673.921493ms] Apr 8 00:34:05.098: INFO: Created: latency-svc-5dzbc Apr 8 00:34:05.162: INFO: Got endpoints: latency-svc-5dzbc [915.327203ms] Apr 8 00:34:05.164: INFO: Created: latency-svc-mvfdx Apr 8 00:34:05.183: INFO: Got endpoints: latency-svc-mvfdx [900.689538ms] Apr 8 00:34:05.252: INFO: Created: latency-svc-nzvf4 Apr 8 00:34:05.267: INFO: Got endpoints: latency-svc-nzvf4 [960.74233ms] Apr 8 00:34:05.288: INFO: Created: latency-svc-2vrnt Apr 8 00:34:05.303: INFO: Got endpoints: latency-svc-2vrnt [945.883632ms] Apr 8 00:34:05.330: INFO: Created: latency-svc-bl7g5 Apr 8 00:34:05.360: INFO: Got endpoints: latency-svc-bl7g5 [961.135722ms] Apr 8 00:34:05.376: INFO: Created: latency-svc-z22qp Apr 8 00:34:05.402: INFO: Got endpoints: latency-svc-z22qp [934.575527ms] Apr 8 00:34:05.528: INFO: Created: latency-svc-7fbhq Apr 8 00:34:05.595: INFO: Got endpoints: latency-svc-7fbhq [1.082193724s] Apr 8 00:34:05.595: INFO: Created: latency-svc-tsgsv Apr 8 00:34:05.614: INFO: Got endpoints: latency-svc-tsgsv [1.06595008s] Apr 8 00:34:05.679: INFO: Created: latency-svc-ll6z6 Apr 8 00:34:05.714: INFO: Got endpoints: latency-svc-ll6z6 [1.091001818s] Apr 8 00:34:05.731: INFO: Created: latency-svc-gb99c Apr 8 00:34:05.744: INFO: Got endpoints: latency-svc-gb99c [1.108333555s] Apr 8 00:34:05.804: INFO: Created: latency-svc-zvj8j Apr 8 00:34:05.828: INFO: Got endpoints: latency-svc-zvj8j [1.150045271s] Apr 8 00:34:05.859: INFO: Created: latency-svc-ppd9g Apr 8 00:34:05.887: INFO: Got endpoints: latency-svc-ppd9g [1.168950705s] Apr 8 00:34:05.935: INFO: Created: latency-svc-9hz4l Apr 8 00:34:05.953: INFO: Created: latency-svc-nkd86 Apr 8 00:34:05.954: INFO: Got endpoints: latency-svc-9hz4l [1.179672709s] Apr 8 00:34:05.966: INFO: Got endpoints: latency-svc-nkd86 [1.150240763s] Apr 8 00:34:05.990: INFO: Created: latency-svc-gqzml Apr 8 00:34:06.015: INFO: Got endpoints: latency-svc-gqzml [1.110266464s] Apr 8 00:34:06.067: INFO: 
Created: latency-svc-hzk9b Apr 8 00:34:06.085: INFO: Created: latency-svc-hpd6t Apr 8 00:34:06.085: INFO: Got endpoints: latency-svc-hzk9b [923.098611ms] Apr 8 00:34:06.099: INFO: Got endpoints: latency-svc-hpd6t [916.27786ms] Apr 8 00:34:06.122: INFO: Created: latency-svc-b6cfd Apr 8 00:34:06.139: INFO: Got endpoints: latency-svc-b6cfd [872.018466ms] Apr 8 00:34:06.163: INFO: Created: latency-svc-ww966 Apr 8 00:34:06.192: INFO: Got endpoints: latency-svc-ww966 [889.457701ms] Apr 8 00:34:06.207: INFO: Created: latency-svc-rhrmt Apr 8 00:34:06.220: INFO: Got endpoints: latency-svc-rhrmt [860.123122ms] Apr 8 00:34:06.236: INFO: Created: latency-svc-m6dpm Apr 8 00:34:06.250: INFO: Got endpoints: latency-svc-m6dpm [847.584346ms] Apr 8 00:34:06.267: INFO: Created: latency-svc-2mlpb Apr 8 00:34:06.280: INFO: Got endpoints: latency-svc-2mlpb [684.977844ms] Apr 8 00:34:06.312: INFO: Created: latency-svc-s97jk Apr 8 00:34:06.331: INFO: Got endpoints: latency-svc-s97jk [716.943932ms] Apr 8 00:34:06.332: INFO: Created: latency-svc-h8wrm Apr 8 00:34:06.367: INFO: Got endpoints: latency-svc-h8wrm [652.517363ms] Apr 8 00:34:06.462: INFO: Created: latency-svc-zck9k Apr 8 00:34:06.482: INFO: Got endpoints: latency-svc-zck9k [738.157162ms] Apr 8 00:34:06.483: INFO: Created: latency-svc-5rkr9 Apr 8 00:34:06.493: INFO: Got endpoints: latency-svc-5rkr9 [664.964772ms] Apr 8 00:34:06.511: INFO: Created: latency-svc-drt47 Apr 8 00:34:06.535: INFO: Got endpoints: latency-svc-drt47 [647.703533ms] Apr 8 00:34:06.594: INFO: Created: latency-svc-x5hck Apr 8 00:34:06.614: INFO: Got endpoints: latency-svc-x5hck [660.235039ms] Apr 8 00:34:06.614: INFO: Created: latency-svc-d4lmw Apr 8 00:34:06.657: INFO: Got endpoints: latency-svc-d4lmw [691.133943ms] Apr 8 00:34:06.686: INFO: Created: latency-svc-5v6vx Apr 8 00:34:06.719: INFO: Got endpoints: latency-svc-5v6vx [704.120808ms] Apr 8 00:34:06.721: INFO: Created: latency-svc-2n9cx Apr 8 00:34:06.734: INFO: Got endpoints: latency-svc-2n9cx 
[648.993495ms] Apr 8 00:34:06.750: INFO: Created: latency-svc-v8q7t Apr 8 00:34:06.776: INFO: Got endpoints: latency-svc-v8q7t [676.734423ms] Apr 8 00:34:06.864: INFO: Created: latency-svc-4m78m Apr 8 00:34:06.873: INFO: Got endpoints: latency-svc-4m78m [733.409222ms] Apr 8 00:34:06.901: INFO: Created: latency-svc-km4lh Apr 8 00:34:06.920: INFO: Got endpoints: latency-svc-km4lh [728.043492ms] Apr 8 00:34:06.938: INFO: Created: latency-svc-xxqk6 Apr 8 00:34:06.963: INFO: Got endpoints: latency-svc-xxqk6 [742.720942ms] Apr 8 00:34:07.006: INFO: Created: latency-svc-b6tdh Apr 8 00:34:07.012: INFO: Got endpoints: latency-svc-b6tdh [761.994251ms] Apr 8 00:34:07.033: INFO: Created: latency-svc-tvk94 Apr 8 00:34:07.050: INFO: Got endpoints: latency-svc-tvk94 [770.667112ms] Apr 8 00:34:07.069: INFO: Created: latency-svc-2d7cz Apr 8 00:34:07.086: INFO: Got endpoints: latency-svc-2d7cz [754.87932ms] Apr 8 00:34:07.105: INFO: Created: latency-svc-b2mcb Apr 8 00:34:07.133: INFO: Got endpoints: latency-svc-b2mcb [765.423043ms] Apr 8 00:34:07.148: INFO: Created: latency-svc-l9jgp Apr 8 00:34:07.164: INFO: Got endpoints: latency-svc-l9jgp [681.708988ms] Apr 8 00:34:07.178: INFO: Created: latency-svc-s5h4k Apr 8 00:34:07.199: INFO: Got endpoints: latency-svc-s5h4k [705.757501ms] Apr 8 00:34:07.234: INFO: Created: latency-svc-7pz5b Apr 8 00:34:07.286: INFO: Got endpoints: latency-svc-7pz5b [751.429763ms] Apr 8 00:34:07.287: INFO: Created: latency-svc-5gm5n Apr 8 00:34:07.316: INFO: Got endpoints: latency-svc-5gm5n [702.257441ms] Apr 8 00:34:07.409: INFO: Created: latency-svc-7h5x5 Apr 8 00:34:07.443: INFO: Created: latency-svc-cc76v Apr 8 00:34:07.443: INFO: Got endpoints: latency-svc-7h5x5 [785.963197ms] Apr 8 00:34:07.460: INFO: Got endpoints: latency-svc-cc76v [740.733345ms] Apr 8 00:34:07.539: INFO: Created: latency-svc-74wxj Apr 8 00:34:07.561: INFO: Got endpoints: latency-svc-74wxj [826.80547ms] Apr 8 00:34:07.561: INFO: Created: latency-svc-gstjp Apr 8 00:34:07.574: INFO: 
Got endpoints: latency-svc-gstjp [797.363524ms] Apr 8 00:34:07.592: INFO: Created: latency-svc-qqg69 Apr 8 00:34:07.610: INFO: Got endpoints: latency-svc-qqg69 [736.984551ms] Apr 8 00:34:07.610: INFO: Latencies: [77.04924ms 120.305695ms 207.210341ms 216.129548ms 240.698286ms 270.093991ms 330.616172ms 360.893071ms 399.024196ms 428.999213ms 483.731074ms 532.74105ms 581.289917ms 582.740701ms 590.310859ms 594.933075ms 600.741085ms 601.851178ms 604.877597ms 604.884168ms 604.981871ms 605.088617ms 605.69934ms 606.526176ms 607.966829ms 611.258054ms 611.554516ms 611.646972ms 617.229271ms 619.477926ms 619.543065ms 621.217501ms 621.497525ms 622.309942ms 622.497331ms 622.741837ms 622.785515ms 622.89382ms 623.481543ms 623.874755ms 624.277197ms 625.574783ms 627.98802ms 632.626234ms 634.071507ms 634.756528ms 635.090749ms 635.394458ms 636.228ms 638.821571ms 640.373079ms 640.584102ms 640.895608ms 641.492337ms 642.087367ms 642.462791ms 643.047922ms 644.112236ms 646.424233ms 646.675026ms 646.863474ms 646.875377ms 646.877567ms 647.115653ms 647.118967ms 647.208184ms 647.703533ms 647.969805ms 648.212361ms 648.993495ms 649.348616ms 650.828788ms 652.517363ms 652.660831ms 653.228104ms 653.305381ms 653.488989ms 653.558662ms 653.95756ms 654.287687ms 655.493179ms 656.306854ms 656.340839ms 656.879676ms 657.669259ms 657.865651ms 658.060012ms 659.017993ms 659.035443ms 659.740717ms 660.235039ms 661.138134ms 662.837902ms 664.170541ms 664.951403ms 664.963893ms 664.964772ms 664.965305ms 665.052951ms 665.675976ms 670.460333ms 670.810185ms 672.24377ms 673.921493ms 676.734423ms 681.708988ms 681.791661ms 683.395201ms 684.241013ms 684.977844ms 686.437326ms 686.638211ms 686.969136ms 688.847277ms 691.133943ms 701.250923ms 701.64141ms 702.257441ms 704.120808ms 705.22307ms 705.641173ms 705.757501ms 715.056493ms 716.943932ms 725.011932ms 728.043492ms 730.278577ms 733.409222ms 736.984551ms 738.157162ms 738.451764ms 740.733345ms 741.999652ms 742.720942ms 743.568185ms 748.690981ms 751.429763ms 754.87932ms 
758.711749ms 761.994251ms 765.423043ms 767.908073ms 768.572664ms 769.204705ms 770.667112ms 776.755632ms 784.217169ms 785.963197ms 794.643554ms 797.363524ms 802.377617ms 807.099527ms 808.87722ms 811.654815ms 812.939738ms 815.279923ms 820.060118ms 820.277805ms 820.342597ms 821.441535ms 826.80547ms 826.843113ms 827.33717ms 831.182966ms 831.992957ms 832.384815ms 837.564681ms 838.576396ms 838.897125ms 839.176716ms 845.597376ms 845.695072ms 847.584346ms 850.276098ms 851.333125ms 854.92018ms 860.123122ms 860.889057ms 867.715351ms 872.018466ms 877.594026ms 886.117478ms 889.457701ms 900.689538ms 915.327203ms 916.27786ms 923.098611ms 934.575527ms 945.883632ms 960.74233ms 961.135722ms 1.06595008s 1.082193724s 1.091001818s 1.108333555s 1.110266464s 1.150045271s 1.150240763s 1.168950705s 1.179672709s] Apr 8 00:34:07.610: INFO: 50 %ile: 670.460333ms Apr 8 00:34:07.610: INFO: 90 %ile: 877.594026ms Apr 8 00:34:07.610: INFO: 99 %ile: 1.168950705s Apr 8 00:34:07.610: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:34:07.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4197" for this suite. 
• [SLOW TEST:13.093 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":194,"skipped":3474,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:34:07.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 8 00:34:11.728: INFO: &Pod{ObjectMeta:{send-events-4cee4d68-6364-4e2e-bfe2-ea6c8ae9c2c6 events-9043 /api/v1/namespaces/events-9043/pods/send-events-4cee4d68-6364-4e2e-bfe2-ea6c8ae9c2c6 a1226b7e-2ac9-487c-9826-3fba3aad75e1 6283982 0 2020-04-08 00:34:07 +0000 UTC map[name:foo time:695871607] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p9n7f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p9n7f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p9n7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Contai
ner{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:34:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:34:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:34:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 00:34:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.200,StartTime:2020-04-08 00:34:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 00:34:09 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://48287130c56c12219661b99e551ebf6b44fad8edb96b4b3ab23a8bd8cabc3e06,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 8 00:34:13.743: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 8 00:34:15.747: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:34:15.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9043" for this suite. 
• [SLOW TEST:8.127 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":195,"skipped":3485,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:34:15.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:34:15.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1715" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":196,"skipped":3492,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:34:15.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 8 00:34:16.028: INFO: PodSpec: initContainers in spec.initContainers Apr 8 00:35:03.638: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b811324d-ed67-432d-92e2-c4399783302b", GenerateName:"", Namespace:"init-container-8184", SelfLink:"/api/v1/namespaces/init-container-8184/pods/pod-init-b811324d-ed67-432d-92e2-c4399783302b", UID:"c7f45b61-3ebf-4285-8d49-2b6de960815e", ResourceVersion:"6284896", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721902856, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"name":"foo", "time":"28686276"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-f2frk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006438540), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-f2frk", 
ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-f2frk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, 
s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-f2frk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005555b68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002917880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005555bf0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005555c10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005555c18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc005555c1c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, 
Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902856, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902856, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902856, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721902856, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.228", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.228"}}, StartTime:(*v1.Time)(0xc0036c7700), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002917960)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029179d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://62fcfcb58327fbf5c61f7be7c4256cce0103a519d3bb9fb22601feca16ed2086", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0036c7740), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0036c7720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc005555c9f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:35:03.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8184" for this suite. 
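Reconstructed from the pod dump above (same container names, images, commands, and CPU resources), the pod this test submits would look roughly like the manifest below. This is an illustrative reconstruction, not the test's actual generated object; the real pod name and labels are randomized per run. Because `init1` runs `/bin/false` and the pod's restartPolicy is `Always`, the kubelet keeps restarting `init1` (RestartCount 3 in the dump) and never starts `init2` or the app container `run1`, which is exactly what the test asserts.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example   # the real test uses a generated unique name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails, blocking init2 and run1
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2
    resources:
      limits:
        cpu: 100m
      requests:
        cpu: 100m
```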
• [SLOW TEST:47.738 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":197,"skipped":3499,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:35:03.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 8 00:35:03.748: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2494 /api/v1/namespaces/watch-2494/configmaps/e2e-watch-test-watch-closed 636c7054-b933-425a-bf53-6108e84c32eb 6284902 0 2020-04-08 00:35:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} 
Apr 8 00:35:03.748: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2494 /api/v1/namespaces/watch-2494/configmaps/e2e-watch-test-watch-closed 636c7054-b933-425a-bf53-6108e84c32eb 6284903 0 2020-04-08 00:35:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 8 00:35:03.765: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2494 /api/v1/namespaces/watch-2494/configmaps/e2e-watch-test-watch-closed 636c7054-b933-425a-bf53-6108e84c32eb 6284904 0 2020-04-08 00:35:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 8 00:35:03.765: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2494 /api/v1/namespaces/watch-2494/configmaps/e2e-watch-test-watch-closed 636c7054-b933-425a-bf53-6108e84c32eb 6284905 0 2020-04-08 00:35:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:35:03.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2494" for this suite. 
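The restart-from-resourceVersion semantics exercised above can be modeled without a live cluster: the client remembers the last resourceVersion it observed (6284903 when the first watch closed) and opens the next watch from that point, so the intervening MODIFIED (6284904) and DELETED (6284905) events are still delivered. The `event`/`replayFrom` names below are illustrative, not client-go's API:

```go
package main

import "fmt"

// event models a minimal watch notification; resourceVersion increases
// monotonically, as for the ConfigMap events in the log above.
type event struct {
	kind            string // ADDED, MODIFIED, DELETED
	resourceVersion int
}

// replayFrom returns the events that occurred strictly after rv, i.e.
// what a watch restarted at resourceVersion rv should deliver. This is
// an illustrative model of the semantics, not client-go code.
func replayFrom(all []event, rv int) []event {
	var out []event
	for _, e := range all {
		if e.resourceVersion > rv {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	// Mirrors the log: the first watch sees ADDED(6284902) and
	// MODIFIED(6284903), then closes; the second watch resumes from 6284903.
	all := []event{
		{"ADDED", 6284902},
		{"MODIFIED", 6284903},
		{"MODIFIED", 6284904},
		{"DELETED", 6284905},
	}
	lastSeen := 6284903
	for _, e := range replayFrom(all, lastSeen) {
		fmt.Println(e.kind, e.resourceVersion)
	}
}
```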
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":198,"skipped":3506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:35:03.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:35:03.831: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96ac082d-b1e8-47eb-aa9c-aebcc90caeb2" in namespace "downward-api-3735" to be "Succeeded or Failed" Apr 8 00:35:03.843: INFO: Pod "downwardapi-volume-96ac082d-b1e8-47eb-aa9c-aebcc90caeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.807086ms Apr 8 00:35:05.870: INFO: Pod "downwardapi-volume-96ac082d-b1e8-47eb-aa9c-aebcc90caeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038502103s Apr 8 00:35:07.874: INFO: Pod "downwardapi-volume-96ac082d-b1e8-47eb-aa9c-aebcc90caeb2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042928592s STEP: Saw pod success Apr 8 00:35:07.874: INFO: Pod "downwardapi-volume-96ac082d-b1e8-47eb-aa9c-aebcc90caeb2" satisfied condition "Succeeded or Failed" Apr 8 00:35:07.877: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-96ac082d-b1e8-47eb-aa9c-aebcc90caeb2 container client-container: STEP: delete the pod Apr 8 00:35:07.896: INFO: Waiting for pod downwardapi-volume-96ac082d-b1e8-47eb-aa9c-aebcc90caeb2 to disappear Apr 8 00:35:07.900: INFO: Pod downwardapi-volume-96ac082d-b1e8-47eb-aa9c-aebcc90caeb2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:35:07.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3735" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3543,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:35:07.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:35:07.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9946" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":200,"skipped":3677,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:35:07.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 8 00:35:08.104: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:35:08.106: INFO: Number of nodes with available pods: 0 Apr 8 00:35:08.106: INFO: Node latest-worker is running more than one daemon pod Apr 8 00:35:09.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:35:09.114: INFO: Number of nodes with available pods: 0 Apr 8 00:35:09.114: INFO: Node latest-worker is running more than one daemon pod Apr 8 00:35:10.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:35:10.115: INFO: Number of nodes with available pods: 0 Apr 8 00:35:10.115: INFO: Node latest-worker is running more than one daemon pod Apr 8 00:35:11.112: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:35:11.116: INFO: Number of nodes with available pods: 0 Apr 8 00:35:11.116: INFO: Node latest-worker is running more than one daemon pod Apr 8 00:35:12.112: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:35:12.116: INFO: Number of nodes with available pods: 2 Apr 8 00:35:12.116: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 8 00:35:12.131: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 00:35:12.137: INFO: Number of nodes with available pods: 2 Apr 8 00:35:12.137: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4810, will wait for the garbage collector to delete the pods Apr 8 00:35:13.359: INFO: Deleting DaemonSet.extensions daemon-set took: 13.676497ms Apr 8 00:35:13.659: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.235449ms Apr 8 00:35:23.063: INFO: Number of nodes with available pods: 0 Apr 8 00:35:23.063: INFO: Number of running nodes: 0, number of available pods: 0 Apr 8 00:35:23.066: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4810/daemonsets","resourceVersion":"6285070"},"items":null} Apr 8 00:35:23.068: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4810/pods","resourceVersion":"6285070"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:35:23.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4810" for this suite. 
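What the DaemonSet test verifies — a pod forced into the Failed phase being revived so every tolerating node stays covered — reduces to a small reconciliation step. A simplified Python sketch (node names and the single-pass logic are illustrative, not the real controller):

```python
# Simplified sketch of DaemonSet reconciliation: drop Failed pods, then
# create a replacement on every schedulable node left uncovered. The
# second worker's name is assumed; the log only names latest-worker.

def reconcile(nodes, pods):
    """One reconcile pass: return pods per node after reviving failures."""
    alive = {n: p for n, p in pods.items() if p != "Failed"}
    for node, tolerates_taints in nodes.items():
        if tolerates_taints and node not in alive:
            alive[node] = "Pending"  # controller recreates the daemon pod
    return alive

nodes = {
    "latest-worker": True,
    "latest-worker2": True,           # assumed second worker node
    "latest-control-plane": False,    # NoSchedule master taint, skipped
}
pods = {"latest-worker": "Failed", "latest-worker2": "Running"}
revived = reconcile(nodes, pods)
```

The tainted control-plane node is skipped, just as the log's "can't tolerate node latest-control-plane" lines indicate, and the failed pod is replaced rather than left dead.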
• [SLOW TEST:15.092 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":201,"skipped":3697,"failed":0} [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:35:23.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:35:23.159: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6b67399-717d-4093-8d3b-5f7500c420a9" in namespace "downward-api-8831" to be "Succeeded or Failed" Apr 8 00:35:23.165: INFO: Pod "downwardapi-volume-b6b67399-717d-4093-8d3b-5f7500c420a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107867ms Apr 8 00:35:25.169: INFO: Pod "downwardapi-volume-b6b67399-717d-4093-8d3b-5f7500c420a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010000968s Apr 8 00:35:27.173: INFO: Pod "downwardapi-volume-b6b67399-717d-4093-8d3b-5f7500c420a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014371274s STEP: Saw pod success Apr 8 00:35:27.173: INFO: Pod "downwardapi-volume-b6b67399-717d-4093-8d3b-5f7500c420a9" satisfied condition "Succeeded or Failed" Apr 8 00:35:27.177: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b6b67399-717d-4093-8d3b-5f7500c420a9 container client-container: STEP: delete the pod Apr 8 00:35:27.211: INFO: Waiting for pod downwardapi-volume-b6b67399-717d-4093-8d3b-5f7500c420a9 to disappear Apr 8 00:35:27.237: INFO: Pod downwardapi-volume-b6b67399-717d-4093-8d3b-5f7500c420a9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:35:27.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8831" for this suite. 
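The DefaultMode test above checks that the permission bits given in the volume source's `defaultMode` show up on the projected files. A sketch of such a downward API volume entry in Python, using an illustrative mode of 0644 (the conformance test may assert a different value):

```python
# Sketch of a downward API volume with defaultMode, and how that octal
# mode maps to the permission string a container would see with `ls -l`.
# The volume layout is illustrative, not the test's exact spec.
import stat

default_mode = 0o644  # illustrative; controls file permission bits

volume = {
    "name": "podinfo",
    "downwardAPI": {
        "defaultMode": default_mode,
        "items": [
            {"path": "podname", "fieldRef": {"fieldPath": "metadata.name"}},
        ],
    },
}

# Render the mode as a regular file's permission string.
mode_string = stat.filemode(stat.S_IFREG | default_mode)
```

With 0644 the projected file reads as `-rw-r--r--`; the test asserts the equivalent string for whatever mode it sets.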
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3697,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:35:27.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-7f4f11be-ce8e-4bad-aa49-7073b6f17f25 STEP: Creating a pod to test consume configMaps Apr 8 00:35:27.324: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-744ef3f7-ead4-4330-9d78-74bd5c0880da" in namespace "projected-8341" to be "Succeeded or Failed" Apr 8 00:35:27.326: INFO: Pod "pod-projected-configmaps-744ef3f7-ead4-4330-9d78-74bd5c0880da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.537186ms Apr 8 00:35:29.330: INFO: Pod "pod-projected-configmaps-744ef3f7-ead4-4330-9d78-74bd5c0880da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006064043s Apr 8 00:35:31.334: INFO: Pod "pod-projected-configmaps-744ef3f7-ead4-4330-9d78-74bd5c0880da": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010540672s STEP: Saw pod success Apr 8 00:35:31.334: INFO: Pod "pod-projected-configmaps-744ef3f7-ead4-4330-9d78-74bd5c0880da" satisfied condition "Succeeded or Failed" Apr 8 00:35:31.338: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-744ef3f7-ead4-4330-9d78-74bd5c0880da container projected-configmap-volume-test: STEP: delete the pod Apr 8 00:35:31.360: INFO: Waiting for pod pod-projected-configmaps-744ef3f7-ead4-4330-9d78-74bd5c0880da to disappear Apr 8 00:35:31.365: INFO: Pod pod-projected-configmaps-744ef3f7-ead4-4330-9d78-74bd5c0880da no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:35:31.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8341" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3704,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:35:31.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 8 00:35:35.990: INFO: Successfully updated pod "annotationupdateca57e7a2-1f48-464c-b993-eb0663c3378b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:35:38.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6530" for this suite. • [SLOW TEST:6.646 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3706,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:35:38.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:35:38.129: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa493a10-abad-474f-bbb5-4937f8b036e9" in namespace "downward-api-4335" to be "Succeeded or Failed" Apr 8 00:35:38.155: INFO: Pod "downwardapi-volume-fa493a10-abad-474f-bbb5-4937f8b036e9": Phase="Pending", Reason="", readiness=false. Elapsed: 25.714477ms Apr 8 00:35:40.159: INFO: Pod "downwardapi-volume-fa493a10-abad-474f-bbb5-4937f8b036e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029501037s Apr 8 00:35:42.163: INFO: Pod "downwardapi-volume-fa493a10-abad-474f-bbb5-4937f8b036e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033377705s STEP: Saw pod success Apr 8 00:35:42.163: INFO: Pod "downwardapi-volume-fa493a10-abad-474f-bbb5-4937f8b036e9" satisfied condition "Succeeded or Failed" Apr 8 00:35:42.166: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fa493a10-abad-474f-bbb5-4937f8b036e9 container client-container: STEP: delete the pod Apr 8 00:35:42.185: INFO: Waiting for pod downwardapi-volume-fa493a10-abad-474f-bbb5-4937f8b036e9 to disappear Apr 8 00:35:42.192: INFO: Pod downwardapi-volume-fa493a10-abad-474f-bbb5-4937f8b036e9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:35:42.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4335" for this suite. 
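The cpu-limit test relies on the downward API's `resourceFieldRef` divisor arithmetic: the value written into the projected file is the container's limit divided by the divisor, rounded up to a whole number. A hedged sketch of that arithmetic in Python (the 1250m figure is illustrative, not necessarily the test's actual limit):

```python
# Sketch of resourceFieldRef divisor arithmetic for limits.cpu.
# A divisor of "1m" exposes the limit in millicores; a divisor of "1"
# exposes whole cores, rounded up. Values here are illustrative.
import math

MILLICORES_PER_UNIT = {"1m": 1, "1": 1000}  # divisor -> millicores

def exposed_value(limit_millicores, divisor):
    """Integer written to the downward-API file: ceil(limit / divisor)."""
    return math.ceil(limit_millicores / MILLICORES_PER_UNIT[divisor])

as_millicores = exposed_value(1250, "1m")  # stays 1250
as_cores = exposed_value(1250, "1")        # 1.25 cores rounds up to 2
```

The round-up matters: a fractional limit exposed with divisor "1" never reads as 0, so a container can always use the value as a worker count or similar.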
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3732,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:35:42.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-3910 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-3910 Apr 8 00:35:42.327: INFO: Found 0 stateful pods, waiting for 1 Apr 8 00:35:52.331: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 8 00:35:52.353: INFO: Deleting all statefulset in ns statefulset-3910 Apr 8 00:35:52.601: INFO: Scaling statefulset ss to 0 Apr 8 00:36:12.695: 
INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 00:36:12.698: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:36:12.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3910" for this suite. • [SLOW TEST:30.522 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":206,"skipped":3737,"failed":0} [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:36:12.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 8 00:36:12.791: INFO: Asynchronously running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix224170894/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:36:12.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8884" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":207,"skipped":3737,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:36:12.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Apr 8 00:36:17.469: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7545 pod-service-account-3bda7dd7-7239-4007-a0d4-b2330afd200d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 8 00:36:20.170: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7545 pod-service-account-3bda7dd7-7239-4007-a0d4-b2330afd200d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 8 00:36:20.379: INFO: Running
'/usr/local/bin/kubectl exec --namespace=svcaccounts-7545 pod-service-account-3bda7dd7-7239-4007-a0d4-b2330afd200d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:36:20.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7545" for this suite. • [SLOW TEST:7.715 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":208,"skipped":3741,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:36:20.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 8 00:36:20.651: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the 
old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:36:37.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2974" for this suite. • [SLOW TEST:16.687 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":209,"skipped":3749,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:36:37.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds 
(two CRDs) show up in OpenAPI documentation Apr 8 00:36:37.398: INFO: >>> kubeConfig: /root/.kube/config Apr 8 00:36:40.284: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:36:49.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7353" for this suite. • [SLOW TEST:12.491 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":210,"skipped":3758,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:36:49.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the
pod STEP: Reading file content from the nginx-container Apr 8 00:36:53.887: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8211 PodName:pod-sharedvolume-22a0795d-ef5c-43d1-a169-861c10302770 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:36:53.887: INFO: >>> kubeConfig: /root/.kube/config I0408 00:36:53.924793 7 log.go:172] (0xc006589970) (0xc0014cad20) Create stream I0408 00:36:53.924833 7 log.go:172] (0xc006589970) (0xc0014cad20) Stream added, broadcasting: 1 I0408 00:36:53.927480 7 log.go:172] (0xc006589970) Reply frame received for 1 I0408 00:36:53.927525 7 log.go:172] (0xc006589970) (0xc002a2a000) Create stream I0408 00:36:53.927544 7 log.go:172] (0xc006589970) (0xc002a2a000) Stream added, broadcasting: 3 I0408 00:36:53.928541 7 log.go:172] (0xc006589970) Reply frame received for 3 I0408 00:36:53.928575 7 log.go:172] (0xc006589970) (0xc0014cae60) Create stream I0408 00:36:53.928589 7 log.go:172] (0xc006589970) (0xc0014cae60) Stream added, broadcasting: 5 I0408 00:36:53.929718 7 log.go:172] (0xc006589970) Reply frame received for 5 I0408 00:36:53.979776 7 log.go:172] (0xc006589970) Data frame received for 3 I0408 00:36:53.979800 7 log.go:172] (0xc002a2a000) (3) Data frame handling I0408 00:36:53.979813 7 log.go:172] (0xc002a2a000) (3) Data frame sent I0408 00:36:53.979838 7 log.go:172] (0xc006589970) Data frame received for 5 I0408 00:36:53.979867 7 log.go:172] (0xc0014cae60) (5) Data frame handling I0408 00:36:53.979889 7 log.go:172] (0xc006589970) Data frame received for 3 I0408 00:36:53.979915 7 log.go:172] (0xc002a2a000) (3) Data frame handling I0408 00:36:53.981570 7 log.go:172] (0xc006589970) Data frame received for 1 I0408 00:36:53.981612 7 log.go:172] (0xc0014cad20) (1) Data frame handling I0408 00:36:53.981661 7 log.go:172] (0xc0014cad20) (1) Data frame sent I0408 00:36:53.981819 7 log.go:172] (0xc006589970) (0xc0014cad20) 
Stream removed, broadcasting: 1 I0408 00:36:53.981881 7 log.go:172] (0xc006589970) Go away received I0408 00:36:53.981964 7 log.go:172] (0xc006589970) (0xc0014cad20) Stream removed, broadcasting: 1 I0408 00:36:53.981991 7 log.go:172] (0xc006589970) (0xc002a2a000) Stream removed, broadcasting: 3 I0408 00:36:53.982008 7 log.go:172] (0xc006589970) (0xc0014cae60) Stream removed, broadcasting: 5 Apr 8 00:36:53.982: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:36:53.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8211" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":211,"skipped":3771,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:36:53.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 
8 00:36:54.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9969" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":212,"skipped":3781,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:36:54.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:37:13.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1356" for this suite. • [SLOW TEST:19.236 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":213,"skipped":3807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:37:13.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:37:13.409: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eac2f2a7-8c3f-47be-93fc-126eefe664a7" in namespace "downward-api-2217" to be "Succeeded or Failed" Apr 8 00:37:13.412: INFO: Pod "downwardapi-volume-eac2f2a7-8c3f-47be-93fc-126eefe664a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.42457ms Apr 8 00:37:15.416: INFO: Pod "downwardapi-volume-eac2f2a7-8c3f-47be-93fc-126eefe664a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007189698s Apr 8 00:37:17.421: INFO: Pod "downwardapi-volume-eac2f2a7-8c3f-47be-93fc-126eefe664a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012043255s STEP: Saw pod success Apr 8 00:37:17.421: INFO: Pod "downwardapi-volume-eac2f2a7-8c3f-47be-93fc-126eefe664a7" satisfied condition "Succeeded or Failed" Apr 8 00:37:17.430: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-eac2f2a7-8c3f-47be-93fc-126eefe664a7 container client-container: STEP: delete the pod Apr 8 00:37:17.487: INFO: Waiting for pod downwardapi-volume-eac2f2a7-8c3f-47be-93fc-126eefe664a7 to disappear Apr 8 00:37:17.528: INFO: Pod downwardapi-volume-eac2f2a7-8c3f-47be-93fc-126eefe664a7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:37:17.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2217" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3830,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:37:17.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 8 00:37:17.586: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 8 00:37:17.607: INFO: Waiting for 
terminating namespaces to be deleted... Apr 8 00:37:17.610: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 8 00:37:17.633: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:37:17.633: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 00:37:17.633: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:37:17.633: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 00:37:17.633: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 8 00:37:17.649: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:37:17.649: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 00:37:17.649: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:37:17.649: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1603b11e58f6fbb4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:37:18.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6717" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":215,"skipped":3884,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:37:18.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:37:18.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 8 00:37:19.284: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T00:37:19Z generation:1 name:name1 resourceVersion:6285787 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:332d8880-dd63-4e66-9e7b-afa611f732df] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 8 00:37:29.289: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T00:37:29Z generation:1 name:name2 resourceVersion:6285832 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:298d3de3-b0ba-4e6c-899a-4b1846fffcbd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 8 00:37:39.295: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T00:37:19Z generation:2 name:name1 resourceVersion:6285862 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:332d8880-dd63-4e66-9e7b-afa611f732df] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 8 00:37:49.302: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T00:37:29Z generation:2 name:name2 resourceVersion:6285893 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:298d3de3-b0ba-4e6c-899a-4b1846fffcbd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 8 00:37:59.310: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T00:37:19Z generation:2 name:name1 resourceVersion:6285923 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:332d8880-dd63-4e66-9e7b-afa611f732df] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 8 00:38:09.319: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T00:37:29Z generation:2 name:name2 resourceVersion:6285953 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:298d3de3-b0ba-4e6c-899a-4b1846fffcbd] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:38:19.830: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "crd-watch-62" for this suite. • [SLOW TEST:61.163 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":216,"skipped":3888,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:38:19.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 8 00:38:19.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create 
-f - --namespace=kubectl-3216' Apr 8 00:38:20.204: INFO: stderr: "" Apr 8 00:38:20.204: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 8 00:38:20.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3216' Apr 8 00:38:20.301: INFO: stderr: "" Apr 8 00:38:20.301: INFO: stdout: "update-demo-nautilus-9gngd update-demo-nautilus-bbmjz " Apr 8 00:38:20.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9gngd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:20.394: INFO: stderr: "" Apr 8 00:38:20.394: INFO: stdout: "" Apr 8 00:38:20.394: INFO: update-demo-nautilus-9gngd is created but not running Apr 8 00:38:25.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3216' Apr 8 00:38:25.489: INFO: stderr: "" Apr 8 00:38:25.489: INFO: stdout: "update-demo-nautilus-9gngd update-demo-nautilus-bbmjz " Apr 8 00:38:25.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9gngd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:25.584: INFO: stderr: "" Apr 8 00:38:25.584: INFO: stdout: "true" Apr 8 00:38:25.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9gngd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:25.675: INFO: stderr: "" Apr 8 00:38:25.675: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 8 00:38:25.675: INFO: validating pod update-demo-nautilus-9gngd Apr 8 00:38:25.689: INFO: got data: { "image": "nautilus.jpg" } Apr 8 00:38:25.689: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 8 00:38:25.689: INFO: update-demo-nautilus-9gngd is verified up and running Apr 8 00:38:25.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbmjz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:25.773: INFO: stderr: "" Apr 8 00:38:25.773: INFO: stdout: "true" Apr 8 00:38:25.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbmjz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:25.871: INFO: stderr: "" Apr 8 00:38:25.871: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 8 00:38:25.871: INFO: validating pod update-demo-nautilus-bbmjz Apr 8 00:38:25.875: INFO: got data: { "image": "nautilus.jpg" } Apr 8 00:38:25.875: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 8 00:38:25.875: INFO: update-demo-nautilus-bbmjz is verified up and running STEP: scaling down the replication controller Apr 8 00:38:25.878: INFO: scanned /root for discovery docs: Apr 8 00:38:25.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3216' Apr 8 00:38:26.998: INFO: stderr: "" Apr 8 00:38:26.998: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 8 00:38:26.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3216' Apr 8 00:38:27.125: INFO: stderr: "" Apr 8 00:38:27.125: INFO: stdout: "update-demo-nautilus-9gngd update-demo-nautilus-bbmjz " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 8 00:38:32.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3216' Apr 8 00:38:32.233: INFO: stderr: "" Apr 8 00:38:32.233: INFO: stdout: "update-demo-nautilus-9gngd update-demo-nautilus-bbmjz " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 8 00:38:37.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3216' Apr 8 00:38:37.327: INFO: stderr: "" Apr 8 00:38:37.327: INFO: stdout: "update-demo-nautilus-bbmjz " Apr 8 00:38:37.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbmjz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:37.422: INFO: stderr: "" Apr 8 00:38:37.422: INFO: stdout: "true" Apr 8 00:38:37.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbmjz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:37.512: INFO: stderr: "" Apr 8 00:38:37.512: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 8 00:38:37.512: INFO: validating pod update-demo-nautilus-bbmjz Apr 8 00:38:37.515: INFO: got data: { "image": "nautilus.jpg" } Apr 8 00:38:37.515: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 8 00:38:37.515: INFO: update-demo-nautilus-bbmjz is verified up and running STEP: scaling up the replication controller Apr 8 00:38:37.518: INFO: scanned /root for discovery docs: Apr 8 00:38:37.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3216' Apr 8 00:38:38.638: INFO: stderr: "" Apr 8 00:38:38.638: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 8 00:38:38.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3216' Apr 8 00:38:38.735: INFO: stderr: "" Apr 8 00:38:38.736: INFO: stdout: "update-demo-nautilus-bbmjz update-demo-nautilus-rmwt4 " Apr 8 00:38:38.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbmjz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:38.830: INFO: stderr: "" Apr 8 00:38:38.830: INFO: stdout: "true" Apr 8 00:38:38.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbmjz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:38.917: INFO: stderr: "" Apr 8 00:38:38.917: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 8 00:38:38.917: INFO: validating pod update-demo-nautilus-bbmjz Apr 8 00:38:38.920: INFO: got data: { "image": "nautilus.jpg" } Apr 8 00:38:38.920: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 8 00:38:38.920: INFO: update-demo-nautilus-bbmjz is verified up and running Apr 8 00:38:38.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmwt4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:39.012: INFO: stderr: "" Apr 8 00:38:39.012: INFO: stdout: "" Apr 8 00:38:39.012: INFO: update-demo-nautilus-rmwt4 is created but not running Apr 8 00:38:44.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3216' Apr 8 00:38:44.122: INFO: stderr: "" Apr 8 00:38:44.122: INFO: stdout: "update-demo-nautilus-bbmjz update-demo-nautilus-rmwt4 " Apr 8 00:38:44.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbmjz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:44.215: INFO: stderr: "" Apr 8 00:38:44.215: INFO: stdout: "true" Apr 8 00:38:44.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbmjz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:44.306: INFO: stderr: "" Apr 8 00:38:44.306: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 8 00:38:44.306: INFO: validating pod update-demo-nautilus-bbmjz Apr 8 00:38:44.310: INFO: got data: { "image": "nautilus.jpg" } Apr 8 00:38:44.310: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 8 00:38:44.310: INFO: update-demo-nautilus-bbmjz is verified up and running Apr 8 00:38:44.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmwt4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:44.398: INFO: stderr: "" Apr 8 00:38:44.398: INFO: stdout: "true" Apr 8 00:38:44.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmwt4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3216' Apr 8 00:38:44.495: INFO: stderr: "" Apr 8 00:38:44.495: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 8 00:38:44.495: INFO: validating pod update-demo-nautilus-rmwt4 Apr 8 00:38:44.500: INFO: got data: { "image": "nautilus.jpg" } Apr 8 00:38:44.500: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 8 00:38:44.500: INFO: update-demo-nautilus-rmwt4 is verified up and running STEP: using delete to clean up resources Apr 8 00:38:44.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3216' Apr 8 00:38:44.615: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 8 00:38:44.615: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 8 00:38:44.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3216' Apr 8 00:38:44.708: INFO: stderr: "No resources found in kubectl-3216 namespace.\n" Apr 8 00:38:44.708: INFO: stdout: "" Apr 8 00:38:44.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3216 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 8 00:38:44.808: INFO: stderr: "" Apr 8 00:38:44.808: INFO: stdout: "update-demo-nautilus-bbmjz\nupdate-demo-nautilus-rmwt4\n" Apr 8 00:38:45.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3216' Apr 8 00:38:45.406: INFO: stderr: "No resources found in kubectl-3216 namespace.\n" Apr 8 00:38:45.406: INFO: stdout: "" Apr 8 00:38:45.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3216 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 8 00:38:45.498: INFO: stderr: "" Apr 8 00:38:45.498: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:38:45.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3216" for this suite. 
• [SLOW TEST:25.665 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":217,"skipped":3890,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:38:45.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 8 00:38:50.239: INFO: Successfully updated pod "labelsupdate5b0d5157-af54-4d01-8e82-ef1d514d6512" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:38:52.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9513" for this suite. 
• [SLOW TEST:6.761 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3920,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:38:52.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 8 00:38:52.321: INFO: Waiting up to 5m0s for pod "downward-api-92942201-b0a7-4c6c-a22d-539fe957b015" in namespace "downward-api-2064" to be "Succeeded or Failed" Apr 8 00:38:52.352: INFO: Pod "downward-api-92942201-b0a7-4c6c-a22d-539fe957b015": Phase="Pending", Reason="", readiness=false. Elapsed: 30.333722ms Apr 8 00:38:54.355: INFO: Pod "downward-api-92942201-b0a7-4c6c-a22d-539fe957b015": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033625893s Apr 8 00:38:56.360: INFO: Pod "downward-api-92942201-b0a7-4c6c-a22d-539fe957b015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038679521s STEP: Saw pod success Apr 8 00:38:56.360: INFO: Pod "downward-api-92942201-b0a7-4c6c-a22d-539fe957b015" satisfied condition "Succeeded or Failed" Apr 8 00:38:56.363: INFO: Trying to get logs from node latest-worker2 pod downward-api-92942201-b0a7-4c6c-a22d-539fe957b015 container dapi-container: STEP: delete the pod Apr 8 00:38:56.394: INFO: Waiting for pod downward-api-92942201-b0a7-4c6c-a22d-539fe957b015 to disappear Apr 8 00:38:56.396: INFO: Pod downward-api-92942201-b0a7-4c6c-a22d-539fe957b015 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:38:56.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2064" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3956,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:38:56.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1258, will wait for the garbage 
collector to delete the pods Apr 8 00:39:00.541: INFO: Deleting Job.batch foo took: 6.003608ms Apr 8 00:39:00.841: INFO: Terminating Job.batch foo pods took: 300.218242ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:39:43.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1258" for this suite. • [SLOW TEST:46.649 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":220,"skipped":3966,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:39:43.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-a8946d16-7718-411f-b66a-ac741b3d2874 STEP: Creating a pod to test consume secrets Apr 8 00:39:43.115: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-37a574d6-29c8-4ae0-bcaa-ea7c0004e214" in 
namespace "projected-5953" to be "Succeeded or Failed" Apr 8 00:39:43.119: INFO: Pod "pod-projected-secrets-37a574d6-29c8-4ae0-bcaa-ea7c0004e214": Phase="Pending", Reason="", readiness=false. Elapsed: 3.711442ms Apr 8 00:39:45.122: INFO: Pod "pod-projected-secrets-37a574d6-29c8-4ae0-bcaa-ea7c0004e214": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007327437s Apr 8 00:39:47.127: INFO: Pod "pod-projected-secrets-37a574d6-29c8-4ae0-bcaa-ea7c0004e214": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011669994s STEP: Saw pod success Apr 8 00:39:47.127: INFO: Pod "pod-projected-secrets-37a574d6-29c8-4ae0-bcaa-ea7c0004e214" satisfied condition "Succeeded or Failed" Apr 8 00:39:47.130: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-37a574d6-29c8-4ae0-bcaa-ea7c0004e214 container projected-secret-volume-test: STEP: delete the pod Apr 8 00:39:47.175: INFO: Waiting for pod pod-projected-secrets-37a574d6-29c8-4ae0-bcaa-ea7c0004e214 to disappear Apr 8 00:39:47.185: INFO: Pod pod-projected-secrets-37a574d6-29c8-4ae0-bcaa-ea7c0004e214 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:39:47.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5953" for this suite. 
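The volume tests above all follow the same wait pattern: poll the pod roughly every 2s until its phase reaches "Succeeded" or "Failed", up to the 5m0s timeout. A rough sketch of that loop in Python (the `get_phase` callable stands in for the API client the real framework builds from kubeconfig; the sleep is omitted so the sketch runs instantly):

```python
import itertools

def wait_for_terminal_phase(get_phase, interval=2, timeout=300):
    """Poll get_phase() until the pod is Succeeded/Failed or time runs out.

    Returns the terminal phase, or raises TimeoutError — mirroring the
    'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines.
    """
    for elapsed in itertools.count(0, interval):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        # the real framework sleeps `interval` seconds here

# Simulated pod that is Pending on two polls, then Succeeded:
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases)))  # Succeeded
```

This matches the shape of the log: two "Pending" samples at ~2s spacing, then "Succeeded" at ~4s elapsed.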
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3993,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:39:47.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 8 00:39:47.258: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
Apr 8 00:39:47.700: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 8 00:39:49.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903187, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903187, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903187, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903187, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 00:39:52.328: INFO: Waited 547.926742ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:39:52.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3631" for this suite. 
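The aggregator test blocks on the sample-apiserver Deployment until the status dump above flips from MinimumReplicasUnavailable to fully available. That readiness decision reduces to comparisons over the status fields visible in the log; a hedged sketch (field names follow `appsv1.DeploymentStatus`, but the function itself is illustrative, not the framework's actual implementation):

```python
def deployment_complete(desired_replicas, status):
    """True once the Deployment controller has observed the latest
    generation and every replica is both updated and available."""
    return (
        status["observedGeneration"] >= status["generation"]
        and status["updatedReplicas"] == desired_replicas
        and status["availableReplicas"] == desired_replicas
        and status["unavailableReplicas"] == 0
    )

# State captured in the log at 00:39:49 (1 updated, 0 available):
progressing = {"generation": 1, "observedGeneration": 1,
               "updatedReplicas": 1, "availableReplicas": 0,
               "unavailableReplicas": 1}
# State once MinimumReplicasUnavailable clears:
ready = {"generation": 1, "observedGeneration": 1,
         "updatedReplicas": 1, "availableReplicas": 1,
         "unavailableReplicas": 0}
print(deployment_complete(1, progressing), deployment_complete(1, ready))  # False True
```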
• [SLOW TEST:5.674 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":222,"skipped":4001,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:39:52.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 00:39:53.838: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 00:39:55.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903193, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903193, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903193, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903193, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 00:39:58.940: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:39:58.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2610-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:40:00.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8049" for this suite. STEP: Destroying namespace "webhook-8049-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.272 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":223,"skipped":4009,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:40:00.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:40:00.203: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d50f1aad-121b-48cf-9be9-1ce1a428d282" in namespace "downward-api-2624" to be "Succeeded or Failed" Apr 8 00:40:00.218: INFO: Pod "downwardapi-volume-d50f1aad-121b-48cf-9be9-1ce1a428d282": 
Phase="Pending", Reason="", readiness=false. Elapsed: 14.569012ms Apr 8 00:40:02.222: INFO: Pod "downwardapi-volume-d50f1aad-121b-48cf-9be9-1ce1a428d282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019097533s Apr 8 00:40:04.226: INFO: Pod "downwardapi-volume-d50f1aad-121b-48cf-9be9-1ce1a428d282": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023247091s STEP: Saw pod success Apr 8 00:40:04.226: INFO: Pod "downwardapi-volume-d50f1aad-121b-48cf-9be9-1ce1a428d282" satisfied condition "Succeeded or Failed" Apr 8 00:40:04.229: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d50f1aad-121b-48cf-9be9-1ce1a428d282 container client-container: STEP: delete the pod Apr 8 00:40:04.260: INFO: Waiting for pod downwardapi-volume-d50f1aad-121b-48cf-9be9-1ce1a428d282 to disappear Apr 8 00:40:04.272: INFO: Pod downwardapi-volume-d50f1aad-121b-48cf-9be9-1ce1a428d282 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:40:04.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2624" for this suite. 
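The downward-api-2624 test above exercises a downwardAPI volume whose single item projects `metadata.name` into a file the container then prints. A hypothetical equivalent of the pod it creates (names, paths, and the agnhost arguments here are illustrative placeholders, not copied from the framework):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # placeholder; the test generates a UUID name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    command: ["/agnhost", "mounttest", "--file_content=/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The test then asserts that the container's log equals the pod's own name, which is why "Trying to get logs from node ... container client-container" appears before the pod is deleted.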
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":4035,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:40:04.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Apr 8 00:40:04.346: INFO: Created pod &Pod{ObjectMeta:{dns-8956 dns-8956 /api/v1/namespaces/dns-8956/pods/dns-8956 0ac7556e-368c-49f4-bfc1-5e6bb3afede6 6286657 0 2020-04-08 00:40:04 +0000 UTC map[] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d9g59,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d9g59,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d9g59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.
io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 00:40:04.362: INFO: The status of Pod dns-8956 is Pending, waiting for it to be Running (with Ready = true) Apr 8 00:40:06.366: INFO: The status of Pod dns-8956 is Pending, waiting for it to be Running (with Ready = true) Apr 8 00:40:08.365: INFO: The status of Pod dns-8956 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 8 00:40:08.365: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8956 PodName:dns-8956 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:40:08.365: INFO: >>> kubeConfig: /root/.kube/config I0408 00:40:08.396381 7 log.go:172] (0xc0024aabb0) (0xc001ae3680) Create stream I0408 00:40:08.396420 7 log.go:172] (0xc0024aabb0) (0xc001ae3680) Stream added, broadcasting: 1 I0408 00:40:08.398807 7 log.go:172] (0xc0024aabb0) Reply frame received for 1 I0408 00:40:08.398860 7 log.go:172] (0xc0024aabb0) (0xc0023ffea0) Create stream I0408 00:40:08.398881 7 log.go:172] (0xc0024aabb0) (0xc0023ffea0) Stream added, broadcasting: 3 I0408 00:40:08.399857 7 log.go:172] (0xc0024aabb0) Reply frame received for 3 I0408 00:40:08.399903 7 log.go:172] (0xc0024aabb0) (0xc001ae3720) Create stream I0408 00:40:08.399920 7 log.go:172] (0xc0024aabb0) (0xc001ae3720) Stream added, broadcasting: 5 I0408 00:40:08.400841 7 log.go:172] (0xc0024aabb0) Reply frame received for 5 I0408 00:40:08.483320 7 log.go:172] (0xc0024aabb0) Data frame received for 3 I0408 00:40:08.483345 7 log.go:172] (0xc0023ffea0) (3) Data frame handling I0408 00:40:08.483359 7 log.go:172] (0xc0023ffea0) (3) Data frame sent I0408 00:40:08.484095 7 log.go:172] (0xc0024aabb0) Data frame received for 5 I0408 00:40:08.484111 7 log.go:172] (0xc001ae3720) (5) Data frame handling I0408 00:40:08.484283 7 log.go:172] (0xc0024aabb0) Data frame received for 3 I0408 00:40:08.484325 7 log.go:172] (0xc0023ffea0) (3) Data frame handling I0408 00:40:08.485783 7 log.go:172] (0xc0024aabb0) Data frame received for 1 I0408 00:40:08.485805 7 log.go:172] (0xc001ae3680) (1) Data frame handling I0408 00:40:08.485819 7 log.go:172] (0xc001ae3680) (1) Data frame sent I0408 00:40:08.485839 7 log.go:172] (0xc0024aabb0) (0xc001ae3680) Stream removed, broadcasting: 1 I0408 00:40:08.485858 7 log.go:172] (0xc0024aabb0) Go away received I0408 00:40:08.485945 7 log.go:172] (0xc0024aabb0) 
(0xc001ae3680) Stream removed, broadcasting: 1 I0408 00:40:08.485965 7 log.go:172] (0xc0024aabb0) (0xc0023ffea0) Stream removed, broadcasting: 3 I0408 00:40:08.485979 7 log.go:172] (0xc0024aabb0) (0xc001ae3720) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 8 00:40:08.486: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8956 PodName:dns-8956 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 00:40:08.486: INFO: >>> kubeConfig: /root/.kube/config I0408 00:40:08.519627 7 log.go:172] (0xc0033e8420) (0xc000535c20) Create stream I0408 00:40:08.519665 7 log.go:172] (0xc0033e8420) (0xc000535c20) Stream added, broadcasting: 1 I0408 00:40:08.521562 7 log.go:172] (0xc0033e8420) Reply frame received for 1 I0408 00:40:08.521600 7 log.go:172] (0xc0033e8420) (0xc000d4fd60) Create stream I0408 00:40:08.521617 7 log.go:172] (0xc0033e8420) (0xc000d4fd60) Stream added, broadcasting: 3 I0408 00:40:08.522592 7 log.go:172] (0xc0033e8420) Reply frame received for 3 I0408 00:40:08.522641 7 log.go:172] (0xc0033e8420) (0xc000d4fea0) Create stream I0408 00:40:08.522665 7 log.go:172] (0xc0033e8420) (0xc000d4fea0) Stream added, broadcasting: 5 I0408 00:40:08.523462 7 log.go:172] (0xc0033e8420) Reply frame received for 5 I0408 00:40:08.585853 7 log.go:172] (0xc0033e8420) Data frame received for 3 I0408 00:40:08.585887 7 log.go:172] (0xc000d4fd60) (3) Data frame handling I0408 00:40:08.585910 7 log.go:172] (0xc000d4fd60) (3) Data frame sent I0408 00:40:08.587177 7 log.go:172] (0xc0033e8420) Data frame received for 5 I0408 00:40:08.587200 7 log.go:172] (0xc000d4fea0) (5) Data frame handling I0408 00:40:08.587519 7 log.go:172] (0xc0033e8420) Data frame received for 3 I0408 00:40:08.587536 7 log.go:172] (0xc000d4fd60) (3) Data frame handling I0408 00:40:08.589596 7 log.go:172] (0xc0033e8420) Data frame received for 1 I0408 00:40:08.589620 7 log.go:172] (0xc000535c20) (1) Data 
frame handling I0408 00:40:08.589637 7 log.go:172] (0xc000535c20) (1) Data frame sent I0408 00:40:08.589650 7 log.go:172] (0xc0033e8420) (0xc000535c20) Stream removed, broadcasting: 1 I0408 00:40:08.589720 7 log.go:172] (0xc0033e8420) Go away received I0408 00:40:08.589797 7 log.go:172] (0xc0033e8420) (0xc000535c20) Stream removed, broadcasting: 1 I0408 00:40:08.589852 7 log.go:172] (0xc0033e8420) (0xc000d4fd60) Stream removed, broadcasting: 3 I0408 00:40:08.589869 7 log.go:172] (0xc0033e8420) (0xc000d4fea0) Stream removed, broadcasting: 5 Apr 8 00:40:08.589: INFO: Deleting pod dns-8956... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:40:08.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8956" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":225,"skipped":4051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:40:08.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0408 00:40:18.738047 7 metrics_grabber.go:84] 
Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 8 00:40:18.738: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:40:18.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2703" for this suite. 
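The garbage-collector test above deletes the RC without orphaning, then waits for the GC to remove the pods it owned. The GC's core decision is ownership-based: a dependent becomes deletable once every UID in its `ownerReferences` no longer exists. A simplified sketch of that rule (object shapes and names are illustrative; the real GC works over a dependency graph with foreground/background policies):

```python
def orphaned_dependents(objects, existing_owner_uids):
    """Dependents whose every owner is gone; under the default
    (non-orphaning) deletion these are what the GC removes."""
    return [
        o["name"]
        for o in objects
        if o.get("ownerReferences")
        and not any(uid in existing_owner_uids for uid in o["ownerReferences"])
    ]

pods = [{"name": f"rc-pod-{i}", "ownerReferences": ["rc-uid-1"]}
        for i in range(2)]
print(orphaned_dependents(pods, {"rc-uid-1"}))  # [] — owner still exists
print(orphaned_dependents(pods, set()))         # both pods, once the RC is deleted
```

This is why the test's "wait for all pods to be garbage collected" step only needs to poll for an empty pod list after the RC disappears.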
• [SLOW TEST:10.090 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":226,"skipped":4103,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:40:18.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6176 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6176 STEP: creating replication controller externalsvc in namespace services-6176 I0408 00:40:19.045496 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6176, replica count: 2 I0408 00:40:22.095982 7 runners.go:190] externalsvc Pods: 2 out of 2 
created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 00:40:25.096198 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 8 00:40:25.142: INFO: Creating new exec pod Apr 8 00:40:29.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6176 execpod59bb2 -- /bin/sh -x -c nslookup nodeport-service' Apr 8 00:40:29.404: INFO: stderr: "I0408 00:40:29.295364 2768 log.go:172] (0xc000ab7550) (0xc000974780) Create stream\nI0408 00:40:29.295428 2768 log.go:172] (0xc000ab7550) (0xc000974780) Stream added, broadcasting: 1\nI0408 00:40:29.300311 2768 log.go:172] (0xc000ab7550) Reply frame received for 1\nI0408 00:40:29.300361 2768 log.go:172] (0xc000ab7550) (0xc0007db7c0) Create stream\nI0408 00:40:29.300375 2768 log.go:172] (0xc000ab7550) (0xc0007db7c0) Stream added, broadcasting: 3\nI0408 00:40:29.301652 2768 log.go:172] (0xc000ab7550) Reply frame received for 3\nI0408 00:40:29.301705 2768 log.go:172] (0xc000ab7550) (0xc00063ebe0) Create stream\nI0408 00:40:29.301720 2768 log.go:172] (0xc000ab7550) (0xc00063ebe0) Stream added, broadcasting: 5\nI0408 00:40:29.302831 2768 log.go:172] (0xc000ab7550) Reply frame received for 5\nI0408 00:40:29.385692 2768 log.go:172] (0xc000ab7550) Data frame received for 5\nI0408 00:40:29.385735 2768 log.go:172] (0xc00063ebe0) (5) Data frame handling\nI0408 00:40:29.385770 2768 log.go:172] (0xc00063ebe0) (5) Data frame sent\n+ nslookup nodeport-service\nI0408 00:40:29.395510 2768 log.go:172] (0xc000ab7550) Data frame received for 3\nI0408 00:40:29.395528 2768 log.go:172] (0xc0007db7c0) (3) Data frame handling\nI0408 00:40:29.395546 2768 log.go:172] (0xc0007db7c0) (3) Data frame sent\nI0408 00:40:29.396634 2768 log.go:172] (0xc000ab7550) Data frame received for 
3\nI0408 00:40:29.396651 2768 log.go:172] (0xc0007db7c0) (3) Data frame handling\nI0408 00:40:29.396672 2768 log.go:172] (0xc0007db7c0) (3) Data frame sent\nI0408 00:40:29.397341 2768 log.go:172] (0xc000ab7550) Data frame received for 3\nI0408 00:40:29.397374 2768 log.go:172] (0xc0007db7c0) (3) Data frame handling\nI0408 00:40:29.397563 2768 log.go:172] (0xc000ab7550) Data frame received for 5\nI0408 00:40:29.397592 2768 log.go:172] (0xc00063ebe0) (5) Data frame handling\nI0408 00:40:29.399457 2768 log.go:172] (0xc000ab7550) Data frame received for 1\nI0408 00:40:29.399483 2768 log.go:172] (0xc000974780) (1) Data frame handling\nI0408 00:40:29.399497 2768 log.go:172] (0xc000974780) (1) Data frame sent\nI0408 00:40:29.399513 2768 log.go:172] (0xc000ab7550) (0xc000974780) Stream removed, broadcasting: 1\nI0408 00:40:29.399529 2768 log.go:172] (0xc000ab7550) Go away received\nI0408 00:40:29.400090 2768 log.go:172] (0xc000ab7550) (0xc000974780) Stream removed, broadcasting: 1\nI0408 00:40:29.400125 2768 log.go:172] (0xc000ab7550) (0xc0007db7c0) Stream removed, broadcasting: 3\nI0408 00:40:29.400140 2768 log.go:172] (0xc000ab7550) (0xc00063ebe0) Stream removed, broadcasting: 5\n" Apr 8 00:40:29.404: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6176.svc.cluster.local\tcanonical name = externalsvc.services-6176.svc.cluster.local.\nName:\texternalsvc.services-6176.svc.cluster.local\nAddress: 10.96.110.19\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6176, will wait for the garbage collector to delete the pods Apr 8 00:40:29.465: INFO: Deleting ReplicationController externalsvc took: 7.310505ms Apr 8 00:40:29.765: INFO: Terminating ReplicationController externalsvc pods took: 300.255136ms Apr 8 00:40:43.168: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 
Apr 8 00:40:43.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6176" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:24.485 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":227,"skipped":4111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:40:43.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d8046600-5349-443a-9fe4-d32625e56dc7 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-d8046600-5349-443a-9fe4-d32625e56dc7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:40:49.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6658" for this suite. • [SLOW TEST:6.138 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":4144,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:40:49.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-c38117a2-81e1-45e6-85da-fff231b7016f STEP: Creating a pod to test consume configMaps Apr 8 00:40:49.449: INFO: Waiting up to 5m0s for pod "pod-configmaps-d90bb6e2-db15-465b-a72f-f4da5c14b0db" in namespace "configmap-3030" to be "Succeeded or Failed" Apr 8 00:40:49.453: INFO: Pod "pod-configmaps-d90bb6e2-db15-465b-a72f-f4da5c14b0db": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.632137ms Apr 8 00:40:51.479: INFO: Pod "pod-configmaps-d90bb6e2-db15-465b-a72f-f4da5c14b0db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030026248s Apr 8 00:40:53.483: INFO: Pod "pod-configmaps-d90bb6e2-db15-465b-a72f-f4da5c14b0db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033781943s STEP: Saw pod success Apr 8 00:40:53.483: INFO: Pod "pod-configmaps-d90bb6e2-db15-465b-a72f-f4da5c14b0db" satisfied condition "Succeeded or Failed" Apr 8 00:40:53.486: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d90bb6e2-db15-465b-a72f-f4da5c14b0db container configmap-volume-test: STEP: delete the pod Apr 8 00:40:53.512: INFO: Waiting for pod pod-configmaps-d90bb6e2-db15-465b-a72f-f4da5c14b0db to disappear Apr 8 00:40:53.524: INFO: Pod pod-configmaps-d90bb6e2-db15-465b-a72f-f4da5c14b0db no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:40:53.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3030" for this suite. 
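Several of the specs in this log share the same wait loop: poll the pod's phase until it reaches "Succeeded" or "Failed", up to a 5m0s timeout, logging the elapsed time on each poll. A minimal Python sketch of that pattern, with a stubbed `get_pod_phase` callable standing in for the real API call (the helper name and stub are illustrative, not part of the e2e framework):

```python
import time

def wait_for_pod_condition(get_pod_phase, timeout_s=300, poll_s=2.0):
    """Poll until the pod phase is "Succeeded" or "Failed", mirroring the
    framework's 'Waiting up to 5m0s for pod ...' loop."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_pod_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(poll_s)
    raise TimeoutError("pod never reached Succeeded or Failed")

# Stub reproducing the Pending -> Pending -> Succeeded sequence seen above.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), poll_s=0.0)
```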
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":4146,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:40:53.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Apr 8 00:40:57.658: INFO: Pod pod-hostip-de9a8ad2-688f-4b89-b482-ea1170d128ba has hostIP: 172.17.0.12
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:40:57.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2818" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":4154,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:40:57.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-245.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-245.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 00:41:03.798: INFO: DNS probes using dns-245/dns-test-90f511ec-b49c-4bb0-b9b7-95611125b5a5 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:41:03.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-245" for this suite. 
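The awk one-liner in the probe script builds the pod's DNS A-record name by replacing the dots in its IP with dashes and appending `<namespace>.pod.<cluster-domain>`. The same transformation in Python (the pod IP below is illustrative; the namespace matches the log):

```python
def pod_a_record(pod_ip: str, namespace: str, domain: str = "cluster.local") -> str:
    # e.g. 10.244.1.5 in namespace dns-245 -> 10-244-1-5.dns-245.pod.cluster.local
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{domain}"

record = pod_a_record("10.244.1.5", "dns-245")
```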
• [SLOW TEST:6.195 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":231,"skipped":4162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:41:03.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-cd1f261d-1af0-4dae-a558-5e9214015c4c STEP: Creating a pod to test consume secrets Apr 8 00:41:03.928: INFO: Waiting up to 5m0s for pod "pod-secrets-c85e3442-0e6d-45e2-b43c-829ac0ba47de" in namespace "secrets-7798" to be "Succeeded or Failed" Apr 8 00:41:04.019: INFO: Pod "pod-secrets-c85e3442-0e6d-45e2-b43c-829ac0ba47de": Phase="Pending", Reason="", readiness=false. Elapsed: 90.669562ms Apr 8 00:41:06.030: INFO: Pod "pod-secrets-c85e3442-0e6d-45e2-b43c-829ac0ba47de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.102160449s Apr 8 00:41:08.034: INFO: Pod "pod-secrets-c85e3442-0e6d-45e2-b43c-829ac0ba47de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106333777s STEP: Saw pod success Apr 8 00:41:08.034: INFO: Pod "pod-secrets-c85e3442-0e6d-45e2-b43c-829ac0ba47de" satisfied condition "Succeeded or Failed" Apr 8 00:41:08.036: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c85e3442-0e6d-45e2-b43c-829ac0ba47de container secret-volume-test: STEP: delete the pod Apr 8 00:41:08.054: INFO: Waiting for pod pod-secrets-c85e3442-0e6d-45e2-b43c-829ac0ba47de to disappear Apr 8 00:41:08.058: INFO: Pod pod-secrets-c85e3442-0e6d-45e2-b43c-829ac0ba47de no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:41:08.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7798" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":4196,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:41:08.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait 
for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0408 00:41:48.201725 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 8 00:41:48.201: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:41:48.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4998" for this suite. 
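The "orphan pods created by rc if delete options say so" spec exercises the delete propagation policy: with `propagationPolicy: Orphan`, the garbage collector strips the owner reference from the pods instead of cascading the delete, which is why the log waits 30 seconds to confirm the pods survive. A sketch of the DeleteOptions body such a delete sends, built by hand here rather than via client-go:

```python
import json

def orphan_delete_options() -> str:
    """DeleteOptions body asking the garbage collector to orphan dependents."""
    body = {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        # Alternatives are "Background" and "Foreground", both of which cascade.
        "propagationPolicy": "Orphan",
    }
    return json.dumps(body)

payload = orphan_delete_options()
```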
• [SLOW TEST:40.138 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":233,"skipped":4197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:41:48.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:41:48.275: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35e4ef9a-a425-40a7-bf35-a1ba952b1cde" in namespace "projected-4845" to be "Succeeded or Failed" Apr 8 00:41:48.292: INFO: Pod "downwardapi-volume-35e4ef9a-a425-40a7-bf35-a1ba952b1cde": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.917316ms Apr 8 00:41:50.296: INFO: Pod "downwardapi-volume-35e4ef9a-a425-40a7-bf35-a1ba952b1cde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021331605s Apr 8 00:41:52.300: INFO: Pod "downwardapi-volume-35e4ef9a-a425-40a7-bf35-a1ba952b1cde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024833395s STEP: Saw pod success Apr 8 00:41:52.300: INFO: Pod "downwardapi-volume-35e4ef9a-a425-40a7-bf35-a1ba952b1cde" satisfied condition "Succeeded or Failed" Apr 8 00:41:52.302: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-35e4ef9a-a425-40a7-bf35-a1ba952b1cde container client-container: STEP: delete the pod Apr 8 00:41:52.316: INFO: Waiting for pod downwardapi-volume-35e4ef9a-a425-40a7-bf35-a1ba952b1cde to disappear Apr 8 00:41:52.321: INFO: Pod downwardapi-volume-35e4ef9a-a425-40a7-bf35-a1ba952b1cde no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:41:52.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4845" for this suite. 
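The "should set mode on item file" spec sets a per-item `mode` on the projected downward API volume. The Kubernetes API takes these fields as decimal integers, so the familiar octal permission strings have to be converted; a quick check of the mapping (the specific values are common e2e defaults, not read from this log):

```python
def mode_to_api(octal_str: str) -> int:
    """Convert an octal permission string like "0400" to the decimal
    integer that Kubernetes `mode`/`defaultMode` fields expect."""
    return int(octal_str, 8)

read_only_owner = mode_to_api("0400")  # owner read-only
world_readable = mode_to_api("0644")   # typical default
```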
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":4233,"failed":0}
S
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:41:52.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-b61b7dc7-343a-460a-8a87-b0d865e09ddb
STEP: Creating secret with name s-test-opt-upd-91df313b-dd4e-4266-a00b-6b684495863e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b61b7dc7-343a-460a-8a87-b0d865e09ddb
STEP: Updating secret s-test-opt-upd-91df313b-dd4e-4266-a00b-6b684495863e
STEP: Creating secret with name s-test-opt-create-8bedbbb4-d634-4c2b-bad0-681fe926c56d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:43:10.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5815" for this suite.
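The "optional updates" spec above works because the secret volumes are marked `optional: true`: the pod keeps running when `s-test-opt-del-...` is deleted and starts serving `s-test-opt-create-...` once it appears. A sketch of such a volume entry as a manifest fragment built in Python (the secret name is taken from the log; the volume name is illustrative):

```python
def optional_secret_volume(volume_name: str, secret_name: str) -> dict:
    """Pod .spec.volumes entry for a secret the pod tolerates being absent."""
    return {
        "name": volume_name,
        "secret": {
            "secretName": secret_name,
            "optional": True,  # pod starts even if the secret does not exist yet
        },
    }

vol = optional_secret_volume(
    "creates-volume",
    "s-test-opt-create-8bedbbb4-d634-4c2b-bad0-681fe926c56d",
)
```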
• [SLOW TEST:78.547 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":4234,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:43:10.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-f304190c-f713-4b0e-9a44-e2077e79f514
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:43:15.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5649" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4265,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:43:15.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Apr 8 00:43:15.105: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 8 00:43:15.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3934' Apr 8 00:43:15.441: INFO: stderr: "" Apr 8 00:43:15.441: INFO: stdout: "service/agnhost-slave created\n" Apr 8 00:43:15.442: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 8 00:43:15.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3934' Apr 8 
00:43:15.724: INFO: stderr: "" Apr 8 00:43:15.724: INFO: stdout: "service/agnhost-master created\n" Apr 8 00:43:15.724: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 8 00:43:15.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3934' Apr 8 00:43:16.017: INFO: stderr: "" Apr 8 00:43:16.017: INFO: stdout: "service/frontend created\n" Apr 8 00:43:16.017: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 8 00:43:16.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3934' Apr 8 00:43:16.346: INFO: stderr: "" Apr 8 00:43:16.346: INFO: stdout: "deployment.apps/frontend created\n" Apr 8 00:43:16.346: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 8 00:43:16.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3934' Apr 8 00:43:16.792: INFO: stderr: "" Apr 8 00:43:16.792: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 8 00:43:16.792: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 8 00:43:16.792: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3934' Apr 8 00:43:17.122: INFO: stderr: "" Apr 8 00:43:17.122: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 8 00:43:17.122: INFO: Waiting for all frontend pods to be Running. Apr 8 00:43:27.173: INFO: Waiting for frontend to serve content. Apr 8 00:43:27.184: INFO: Trying to add a new entry to the guestbook. Apr 8 00:43:27.193: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 8 00:43:27.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3934' Apr 8 00:43:27.348: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 8 00:43:27.348: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 8 00:43:27.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3934' Apr 8 00:43:27.480: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 8 00:43:27.480: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 8 00:43:27.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3934' Apr 8 00:43:27.591: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 8 00:43:27.591: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 8 00:43:27.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3934' Apr 8 00:43:27.699: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 8 00:43:27.699: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 8 00:43:27.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3934' Apr 8 00:43:27.802: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 8 00:43:27.802: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 8 00:43:27.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3934' Apr 8 00:43:27.916: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 8 00:43:27.916: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:43:27.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3934" for this suite. 
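The guestbook spec drives `kubectl create -f -` and `kubectl delete` and asserts on stdout lines such as `service/frontend created` above. A small parser for the `<kind>/<name> created` confirmation form, as it appears in this log (a helper of my own, not part of the e2e suite):

```python
import re

def parse_created(line: str):
    """Parse kubectl's '<kind>/<name> created' confirmation line; return
    (kind, name) or None if the line is not in that form."""
    m = re.fullmatch(r"(?P<kind>[\w.]+)/(?P<name>[\w.-]+) created", line.strip())
    return (m.group("kind"), m.group("name")) if m else None

parsed = parse_created("deployment.apps/agnhost-master created\n")
```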
• [SLOW TEST:12.866 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":237,"skipped":4278,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:43:27.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:43:35.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-304" for this suite.
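Adoption in the ReplicationController spec works because the controller's equality-based selector matches the orphan pod's labels and the pod has no existing controller owner reference. The subset-match rule can be sketched as (selector and labels below are illustrative, patterned on the test's 'name' label):

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """True when every selector key/value pair is present in the pod's
    labels -- the equality-based matching an RC uses to adopt orphans."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

adopted = selector_matches(
    {"name": "pod-adoption"},
    {"name": "pod-adoption", "extra": "label"},  # extra labels do not block adoption
)
```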
• [SLOW TEST:7.210 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":238,"skipped":4289,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:43:35.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:43:35.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-972d4002-563d-4eba-9db3-a0e09d4ab2a5" in namespace "projected-7264" to be "Succeeded or Failed" Apr 8 00:43:35.246: INFO: Pod "downwardapi-volume-972d4002-563d-4eba-9db3-a0e09d4ab2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.468024ms Apr 8 00:43:37.251: INFO: Pod "downwardapi-volume-972d4002-563d-4eba-9db3-a0e09d4ab2a5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027762228s Apr 8 00:43:39.254: INFO: Pod "downwardapi-volume-972d4002-563d-4eba-9db3-a0e09d4ab2a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031241051s STEP: Saw pod success Apr 8 00:43:39.254: INFO: Pod "downwardapi-volume-972d4002-563d-4eba-9db3-a0e09d4ab2a5" satisfied condition "Succeeded or Failed" Apr 8 00:43:39.257: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-972d4002-563d-4eba-9db3-a0e09d4ab2a5 container client-container: STEP: delete the pod Apr 8 00:43:39.271: INFO: Waiting for pod downwardapi-volume-972d4002-563d-4eba-9db3-a0e09d4ab2a5 to disappear Apr 8 00:43:39.276: INFO: Pod downwardapi-volume-972d4002-563d-4eba-9db3-a0e09d4ab2a5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:43:39.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7264" for this suite. 
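The projected downward API test above verifies that `defaultMode` is applied to files in a projected volume. A minimal sketch of such a pod follows; the `0400` mode, file path, and busybox command are illustrative assumptions, not values read from the test source.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox   # placeholder; prints the mounted file's permission bits
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # the mode the test asserts on the mounted file
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```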
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4306,"failed":0} ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:43:39.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 8 00:43:43.911: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5da60baf-fe74-41f4-bdfc-0b03addd7905" Apr 8 00:43:43.911: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5da60baf-fe74-41f4-bdfc-0b03addd7905" in namespace "pods-3060" to be "terminated due to deadline exceeded" Apr 8 00:43:43.917: INFO: Pod "pod-update-activedeadlineseconds-5da60baf-fe74-41f4-bdfc-0b03addd7905": Phase="Running", Reason="", readiness=true. Elapsed: 6.344105ms Apr 8 00:43:45.943: INFO: Pod "pod-update-activedeadlineseconds-5da60baf-fe74-41f4-bdfc-0b03addd7905": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.031536362s Apr 8 00:43:45.943: INFO: Pod "pod-update-activedeadlineseconds-5da60baf-fe74-41f4-bdfc-0b03addd7905" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:43:45.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3060" for this suite. • [SLOW TEST:6.653 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4306,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:43:45.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 00:43:46.731: INFO: deployment "sample-webhook-deployment" 
doesn't have the required revision set Apr 8 00:43:48.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903426, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903426, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903426, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903426, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 00:43:51.806: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:43:52.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2952" for this suite. STEP: Destroying namespace "webhook-2952-markers" for this suite. 
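The "listing validating webhooks" test registers several `ValidatingWebhookConfiguration` objects, lists them, and deletes them as a collection. The minimal shape of one such object is sketched below; the names, rule set, and service path are placeholders (only the `webhook-2952` namespace and `e2e-test-webhook` service name appear in the log), and the CA bundle is elided.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-webhook-example   # placeholder name
webhooks:
- name: deny-unwanted-configmap-data.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail              # reject the request if the webhook is unreachable
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-2952
      name: e2e-test-webhook
      path: /validate
    # caBundle: <base64-encoded CA certificate>
```

Deleting the whole collection (e.g. by label selector) and then re-creating the non-compliant ConfigMap is what the final two STEP lines above exercise: once the configurations are gone, the previously rejected object is admitted.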
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.405 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":241,"skipped":4306,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:43:52.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Apr 8 00:43:52.447: INFO: Waiting up to 5m0s for pod "client-containers-6a37b1ac-f7c2-4cc5-86ba-3e362914b52a" in namespace "containers-8488" to be "Succeeded or Failed" Apr 8 00:43:52.451: INFO: Pod "client-containers-6a37b1ac-f7c2-4cc5-86ba-3e362914b52a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.735198ms Apr 8 00:43:54.455: INFO: Pod "client-containers-6a37b1ac-f7c2-4cc5-86ba-3e362914b52a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008194977s Apr 8 00:43:56.459: INFO: Pod "client-containers-6a37b1ac-f7c2-4cc5-86ba-3e362914b52a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012396255s STEP: Saw pod success Apr 8 00:43:56.459: INFO: Pod "client-containers-6a37b1ac-f7c2-4cc5-86ba-3e362914b52a" satisfied condition "Succeeded or Failed" Apr 8 00:43:56.462: INFO: Trying to get logs from node latest-worker pod client-containers-6a37b1ac-f7c2-4cc5-86ba-3e362914b52a container test-container: STEP: delete the pod Apr 8 00:43:56.495: INFO: Waiting for pod client-containers-6a37b1ac-f7c2-4cc5-86ba-3e362914b52a to disappear Apr 8 00:43:56.499: INFO: Pod client-containers-6a37b1ac-f7c2-4cc5-86ba-3e362914b52a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:43:56.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8488" for this suite. 
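The Docker Containers test above relies on the Kubernetes rule that a container's `args` field overrides the image's default CMD (while `command` would override its ENTRYPOINT). A minimal illustrative pod, with an assumed image and arguments:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                             # placeholder image
    args: ["echo", "overridden arguments"]     # replaces the image's CMD;
                                               # 'command' is left unset, so the
                                               # image ENTRYPOINT (if any) is kept
```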
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4316,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:43:56.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 8 00:44:01.603: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:44:01.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7095" for this suite. 
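The ReplicaSet variant of the adoption test adds a release step: when the "matched label of one of its pods" is changed so the pod no longer satisfies `spec.selector`, the ReplicaSet drops its ownerReference on that pod and creates a replacement. A sketch of the controller side (the image is a placeholder; only the `pod-adoption-release` name comes from the log):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # relabeling the pod away from this selector
  template:                        # releases it; the RS then creates a new pod
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: k8s.gcr.io/pause:3.2
```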
• [SLOW TEST:5.190 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":243,"skipped":4321,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:44:01.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-315dab07-4160-46dd-9fda-6fc59fd0e802 STEP: Creating a pod to test consume configMaps Apr 8 00:44:01.780: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-edf615b3-e140-4b32-88e8-4a55817e030a" in namespace "projected-6" to be "Succeeded or Failed" Apr 8 00:44:01.810: INFO: Pod "pod-projected-configmaps-edf615b3-e140-4b32-88e8-4a55817e030a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.299608ms Apr 8 00:44:03.949: INFO: Pod "pod-projected-configmaps-edf615b3-e140-4b32-88e8-4a55817e030a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169545869s Apr 8 00:44:05.953: INFO: Pod "pod-projected-configmaps-edf615b3-e140-4b32-88e8-4a55817e030a": Phase="Running", Reason="", readiness=true. Elapsed: 4.17341443s Apr 8 00:44:07.957: INFO: Pod "pod-projected-configmaps-edf615b3-e140-4b32-88e8-4a55817e030a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.17706719s STEP: Saw pod success Apr 8 00:44:07.957: INFO: Pod "pod-projected-configmaps-edf615b3-e140-4b32-88e8-4a55817e030a" satisfied condition "Succeeded or Failed" Apr 8 00:44:07.959: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-edf615b3-e140-4b32-88e8-4a55817e030a container projected-configmap-volume-test: STEP: delete the pod Apr 8 00:44:07.991: INFO: Waiting for pod pod-projected-configmaps-edf615b3-e140-4b32-88e8-4a55817e030a to disappear Apr 8 00:44:08.050: INFO: Pod pod-projected-configmaps-edf615b3-e140-4b32-88e8-4a55817e030a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:44:08.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6" for this suite. 
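The projected configMap test above checks two properties at once: an item *mapping* (a configMap key surfaced at a different path) and a per-item *mode*. An illustrative manifest pair follows; the key names, the mapped path, and the `0400` mode are assumptions rather than values taken from the test source.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map   # placeholder name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-2   # mapping: key data-1 appears under this path
            mode: 0400             # per-item mode, overriding any defaultMode
```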
• [SLOW TEST:6.450 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4321,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:44:08.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 8 00:44:08.237: INFO: Waiting up to 5m0s for pod "var-expansion-3616cf40-966f-4944-b355-746b0b861acf" in namespace "var-expansion-7267" to be "Succeeded or Failed" Apr 8 00:44:08.242: INFO: Pod "var-expansion-3616cf40-966f-4944-b355-746b0b861acf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061195ms Apr 8 00:44:10.327: INFO: Pod "var-expansion-3616cf40-966f-4944-b355-746b0b861acf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.089129344s Apr 8 00:44:12.329: INFO: Pod "var-expansion-3616cf40-966f-4944-b355-746b0b861acf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091769985s STEP: Saw pod success Apr 8 00:44:12.329: INFO: Pod "var-expansion-3616cf40-966f-4944-b355-746b0b861acf" satisfied condition "Succeeded or Failed" Apr 8 00:44:12.332: INFO: Trying to get logs from node latest-worker2 pod var-expansion-3616cf40-966f-4944-b355-746b0b861acf container dapi-container: STEP: delete the pod Apr 8 00:44:12.345: INFO: Waiting for pod var-expansion-3616cf40-966f-4944-b355-746b0b861acf to disappear Apr 8 00:44:12.362: INFO: Pod var-expansion-3616cf40-966f-4944-b355-746b0b861acf no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:44:12.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7267" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4334,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:44:12.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 8 00:44:12.417: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:44:18.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9286" for this suite. • [SLOW TEST:5.974 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":246,"skipped":4336,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:44:18.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook 
pod STEP: Wait for the deployment to be ready Apr 8 00:44:18.775: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 00:44:20.829: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903458, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903458, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903458, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903458, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 00:44:23.871: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:44:23.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3115-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:44:25.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "webhook-3474" for this suite. STEP: Destroying namespace "webhook-3474-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.739 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":247,"skipped":4340,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:44:25.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 8 00:44:29.698: INFO: Successfully updated pod "labelsupdate7ae659ee-4fcc-4849-8db3-3c0cd00ddb84" [AfterEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:44:31.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4622" for this suite. • [SLOW TEST:6.654 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4351,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:44:31.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:44:31.780: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:44:32.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1632" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":249,"skipped":4365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:44:32.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1692 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-1692 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1692 Apr 8 00:44:32.518: INFO: Found 0 stateful pods, waiting for 1 Apr 
8 00:44:42.523: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 8 00:44:42.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1692 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 00:44:42.767: INFO: stderr: "I0408 00:44:42.652451 3044 log.go:172] (0xc000ad88f0) (0xc0006d3540) Create stream\nI0408 00:44:42.652520 3044 log.go:172] (0xc000ad88f0) (0xc0006d3540) Stream added, broadcasting: 1\nI0408 00:44:42.655480 3044 log.go:172] (0xc000ad88f0) Reply frame received for 1\nI0408 00:44:42.655533 3044 log.go:172] (0xc000ad88f0) (0xc000625540) Create stream\nI0408 00:44:42.655547 3044 log.go:172] (0xc000ad88f0) (0xc000625540) Stream added, broadcasting: 3\nI0408 00:44:42.656725 3044 log.go:172] (0xc000ad88f0) Reply frame received for 3\nI0408 00:44:42.656764 3044 log.go:172] (0xc000ad88f0) (0xc0006d35e0) Create stream\nI0408 00:44:42.656775 3044 log.go:172] (0xc000ad88f0) (0xc0006d35e0) Stream added, broadcasting: 5\nI0408 00:44:42.657908 3044 log.go:172] (0xc000ad88f0) Reply frame received for 5\nI0408 00:44:42.724077 3044 log.go:172] (0xc000ad88f0) Data frame received for 5\nI0408 00:44:42.724134 3044 log.go:172] (0xc0006d35e0) (5) Data frame handling\nI0408 00:44:42.724171 3044 log.go:172] (0xc0006d35e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 00:44:42.760075 3044 log.go:172] (0xc000ad88f0) Data frame received for 3\nI0408 00:44:42.760093 3044 log.go:172] (0xc000625540) (3) Data frame handling\nI0408 00:44:42.760106 3044 log.go:172] (0xc000625540) (3) Data frame sent\nI0408 00:44:42.760398 3044 log.go:172] (0xc000ad88f0) Data frame received for 5\nI0408 00:44:42.760432 3044 log.go:172] (0xc0006d35e0) (5) Data frame handling\nI0408 00:44:42.760503 3044 log.go:172] (0xc000ad88f0) 
Data frame received for 3\nI0408 00:44:42.760544 3044 log.go:172] (0xc000625540) (3) Data frame handling\nI0408 00:44:42.762823 3044 log.go:172] (0xc000ad88f0) Data frame received for 1\nI0408 00:44:42.762862 3044 log.go:172] (0xc0006d3540) (1) Data frame handling\nI0408 00:44:42.762901 3044 log.go:172] (0xc0006d3540) (1) Data frame sent\nI0408 00:44:42.762926 3044 log.go:172] (0xc000ad88f0) (0xc0006d3540) Stream removed, broadcasting: 1\nI0408 00:44:42.762990 3044 log.go:172] (0xc000ad88f0) Go away received\nI0408 00:44:42.763211 3044 log.go:172] (0xc000ad88f0) (0xc0006d3540) Stream removed, broadcasting: 1\nI0408 00:44:42.763224 3044 log.go:172] (0xc000ad88f0) (0xc000625540) Stream removed, broadcasting: 3\nI0408 00:44:42.763230 3044 log.go:172] (0xc000ad88f0) (0xc0006d35e0) Stream removed, broadcasting: 5\n" Apr 8 00:44:42.768: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 00:44:42.768: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 00:44:42.771: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 8 00:44:52.776: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 8 00:44:52.776: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 00:44:52.795: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 00:44:52.795: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC }] Apr 8 00:44:52.795: INFO: Apr 8 00:44:52.795: INFO: 
StatefulSet ss has not reached scale 3, at 1 Apr 8 00:44:53.800: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990973705s Apr 8 00:44:54.805: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985815014s Apr 8 00:44:55.810: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980702694s Apr 8 00:44:56.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975489976s Apr 8 00:44:57.819: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971343841s Apr 8 00:44:58.824: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966565267s Apr 8 00:44:59.829: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961718553s Apr 8 00:45:00.834: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957149096s Apr 8 00:45:01.838: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.993454ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1692 Apr 8 00:45:02.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1692 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 00:45:03.067: INFO: stderr: "I0408 00:45:02.971646 3065 log.go:172] (0xc000afe840) (0xc0006ed4a0) Create stream\nI0408 00:45:02.971697 3065 log.go:172] (0xc000afe840) (0xc0006ed4a0) Stream added, broadcasting: 1\nI0408 00:45:02.974034 3065 log.go:172] (0xc000afe840) Reply frame received for 1\nI0408 00:45:02.974075 3065 log.go:172] (0xc000afe840) (0xc000a12000) Create stream\nI0408 00:45:02.974093 3065 log.go:172] (0xc000afe840) (0xc000a12000) Stream added, broadcasting: 3\nI0408 00:45:02.975073 3065 log.go:172] (0xc000afe840) Reply frame received for 3\nI0408 00:45:02.975122 3065 log.go:172] (0xc000afe840) (0xc0006ed540) Create stream\nI0408 00:45:02.975138 3065 log.go:172] (0xc000afe840) 
(0xc0006ed540) Stream added, broadcasting: 5\nI0408 00:45:02.976168 3065 log.go:172] (0xc000afe840) Reply frame received for 5\nI0408 00:45:03.059812 3065 log.go:172] (0xc000afe840) Data frame received for 3\nI0408 00:45:03.059849 3065 log.go:172] (0xc000a12000) (3) Data frame handling\nI0408 00:45:03.059864 3065 log.go:172] (0xc000a12000) (3) Data frame sent\nI0408 00:45:03.059874 3065 log.go:172] (0xc000afe840) Data frame received for 3\nI0408 00:45:03.059883 3065 log.go:172] (0xc000a12000) (3) Data frame handling\nI0408 00:45:03.059991 3065 log.go:172] (0xc000afe840) Data frame received for 5\nI0408 00:45:03.060027 3065 log.go:172] (0xc0006ed540) (5) Data frame handling\nI0408 00:45:03.060059 3065 log.go:172] (0xc0006ed540) (5) Data frame sent\nI0408 00:45:03.060079 3065 log.go:172] (0xc000afe840) Data frame received for 5\nI0408 00:45:03.060099 3065 log.go:172] (0xc0006ed540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 00:45:03.062249 3065 log.go:172] (0xc000afe840) Data frame received for 1\nI0408 00:45:03.062273 3065 log.go:172] (0xc0006ed4a0) (1) Data frame handling\nI0408 00:45:03.062285 3065 log.go:172] (0xc0006ed4a0) (1) Data frame sent\nI0408 00:45:03.062306 3065 log.go:172] (0xc000afe840) (0xc0006ed4a0) Stream removed, broadcasting: 1\nI0408 00:45:03.062335 3065 log.go:172] (0xc000afe840) Go away received\nI0408 00:45:03.062838 3065 log.go:172] (0xc000afe840) (0xc0006ed4a0) Stream removed, broadcasting: 1\nI0408 00:45:03.062871 3065 log.go:172] (0xc000afe840) (0xc000a12000) Stream removed, broadcasting: 3\nI0408 00:45:03.062890 3065 log.go:172] (0xc000afe840) (0xc0006ed540) Stream removed, broadcasting: 5\n" Apr 8 00:45:03.067: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 00:45:03.067: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 00:45:03.067: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1692 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 00:45:03.269: INFO: stderr: "I0408 00:45:03.193265 3086 log.go:172] (0xc000ad98c0) (0xc000a64820) Create stream\nI0408 00:45:03.193323 3086 log.go:172] (0xc000ad98c0) (0xc000a64820) Stream added, broadcasting: 1\nI0408 00:45:03.198122 3086 log.go:172] (0xc000ad98c0) Reply frame received for 1\nI0408 00:45:03.198197 3086 log.go:172] (0xc000ad98c0) (0xc00060b680) Create stream\nI0408 00:45:03.198227 3086 log.go:172] (0xc000ad98c0) (0xc00060b680) Stream added, broadcasting: 3\nI0408 00:45:03.199264 3086 log.go:172] (0xc000ad98c0) Reply frame received for 3\nI0408 00:45:03.199301 3086 log.go:172] (0xc000ad98c0) (0xc0004bcaa0) Create stream\nI0408 00:45:03.199312 3086 log.go:172] (0xc000ad98c0) (0xc0004bcaa0) Stream added, broadcasting: 5\nI0408 00:45:03.200314 3086 log.go:172] (0xc000ad98c0) Reply frame received for 5\nI0408 00:45:03.262478 3086 log.go:172] (0xc000ad98c0) Data frame received for 3\nI0408 00:45:03.262509 3086 log.go:172] (0xc00060b680) (3) Data frame handling\nI0408 00:45:03.262523 3086 log.go:172] (0xc00060b680) (3) Data frame sent\nI0408 00:45:03.262560 3086 log.go:172] (0xc000ad98c0) Data frame received for 5\nI0408 00:45:03.262597 3086 log.go:172] (0xc0004bcaa0) (5) Data frame handling\nI0408 00:45:03.262621 3086 log.go:172] (0xc0004bcaa0) (5) Data frame sent\nI0408 00:45:03.262635 3086 log.go:172] (0xc000ad98c0) Data frame received for 5\nI0408 00:45:03.262647 3086 log.go:172] (0xc0004bcaa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0408 00:45:03.262754 3086 log.go:172] (0xc000ad98c0) Data frame received for 3\nI0408 00:45:03.262776 3086 log.go:172] (0xc00060b680) (3) Data frame handling\nI0408 00:45:03.264245 3086 log.go:172] 
(0xc000ad98c0) Data frame received for 1\nI0408 00:45:03.264262 3086 log.go:172] (0xc000a64820) (1) Data frame handling\nI0408 00:45:03.264273 3086 log.go:172] (0xc000a64820) (1) Data frame sent\nI0408 00:45:03.264289 3086 log.go:172] (0xc000ad98c0) (0xc000a64820) Stream removed, broadcasting: 1\nI0408 00:45:03.264305 3086 log.go:172] (0xc000ad98c0) Go away received\nI0408 00:45:03.264629 3086 log.go:172] (0xc000ad98c0) (0xc000a64820) Stream removed, broadcasting: 1\nI0408 00:45:03.264649 3086 log.go:172] (0xc000ad98c0) (0xc00060b680) Stream removed, broadcasting: 3\nI0408 00:45:03.264660 3086 log.go:172] (0xc000ad98c0) (0xc0004bcaa0) Stream removed, broadcasting: 5\n" Apr 8 00:45:03.269: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 00:45:03.269: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 00:45:03.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1692 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 00:45:03.490: INFO: stderr: "I0408 00:45:03.407433 3106 log.go:172] (0xc0009e6840) (0xc0008d6000) Create stream\nI0408 00:45:03.407490 3106 log.go:172] (0xc0009e6840) (0xc0008d6000) Stream added, broadcasting: 1\nI0408 00:45:03.410768 3106 log.go:172] (0xc0009e6840) Reply frame received for 1\nI0408 00:45:03.410820 3106 log.go:172] (0xc0009e6840) (0xc0007f1400) Create stream\nI0408 00:45:03.410837 3106 log.go:172] (0xc0009e6840) (0xc0007f1400) Stream added, broadcasting: 3\nI0408 00:45:03.411900 3106 log.go:172] (0xc0009e6840) Reply frame received for 3\nI0408 00:45:03.411956 3106 log.go:172] (0xc0009e6840) (0xc0008d60a0) Create stream\nI0408 00:45:03.411990 3106 log.go:172] (0xc0009e6840) (0xc0008d60a0) Stream added, broadcasting: 5\nI0408 00:45:03.413082 3106 log.go:172] (0xc0009e6840) Reply frame 
received for 5\nI0408 00:45:03.483682 3106 log.go:172] (0xc0009e6840) Data frame received for 3\nI0408 00:45:03.483723 3106 log.go:172] (0xc0007f1400) (3) Data frame handling\nI0408 00:45:03.483732 3106 log.go:172] (0xc0007f1400) (3) Data frame sent\nI0408 00:45:03.483740 3106 log.go:172] (0xc0009e6840) Data frame received for 3\nI0408 00:45:03.483745 3106 log.go:172] (0xc0007f1400) (3) Data frame handling\nI0408 00:45:03.483775 3106 log.go:172] (0xc0009e6840) Data frame received for 5\nI0408 00:45:03.483793 3106 log.go:172] (0xc0008d60a0) (5) Data frame handling\nI0408 00:45:03.483818 3106 log.go:172] (0xc0008d60a0) (5) Data frame sent\nI0408 00:45:03.483831 3106 log.go:172] (0xc0009e6840) Data frame received for 5\nI0408 00:45:03.483837 3106 log.go:172] (0xc0008d60a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0408 00:45:03.484993 3106 log.go:172] (0xc0009e6840) Data frame received for 1\nI0408 00:45:03.485030 3106 log.go:172] (0xc0008d6000) (1) Data frame handling\nI0408 00:45:03.485062 3106 log.go:172] (0xc0008d6000) (1) Data frame sent\nI0408 00:45:03.485083 3106 log.go:172] (0xc0009e6840) (0xc0008d6000) Stream removed, broadcasting: 1\nI0408 00:45:03.485228 3106 log.go:172] (0xc0009e6840) Go away received\nI0408 00:45:03.485598 3106 log.go:172] (0xc0009e6840) (0xc0008d6000) Stream removed, broadcasting: 1\nI0408 00:45:03.485622 3106 log.go:172] (0xc0009e6840) (0xc0007f1400) Stream removed, broadcasting: 3\nI0408 00:45:03.485634 3106 log.go:172] (0xc0009e6840) (0xc0008d60a0) Stream removed, broadcasting: 5\n" Apr 8 00:45:03.490: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 00:45:03.490: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 00:45:03.494: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently 
Running - Ready=true Apr 8 00:45:03.494: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 00:45:03.494: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 8 00:45:03.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1692 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 00:45:03.688: INFO: stderr: "I0408 00:45:03.621587 3128 log.go:172] (0xc0003e0a50) (0xc0009d41e0) Create stream\nI0408 00:45:03.621650 3128 log.go:172] (0xc0003e0a50) (0xc0009d41e0) Stream added, broadcasting: 1\nI0408 00:45:03.624101 3128 log.go:172] (0xc0003e0a50) Reply frame received for 1\nI0408 00:45:03.624152 3128 log.go:172] (0xc0003e0a50) (0xc000922000) Create stream\nI0408 00:45:03.624173 3128 log.go:172] (0xc0003e0a50) (0xc000922000) Stream added, broadcasting: 3\nI0408 00:45:03.625268 3128 log.go:172] (0xc0003e0a50) Reply frame received for 3\nI0408 00:45:03.625325 3128 log.go:172] (0xc0003e0a50) (0xc0009d4280) Create stream\nI0408 00:45:03.625341 3128 log.go:172] (0xc0003e0a50) (0xc0009d4280) Stream added, broadcasting: 5\nI0408 00:45:03.626355 3128 log.go:172] (0xc0003e0a50) Reply frame received for 5\nI0408 00:45:03.682954 3128 log.go:172] (0xc0003e0a50) Data frame received for 3\nI0408 00:45:03.682996 3128 log.go:172] (0xc000922000) (3) Data frame handling\nI0408 00:45:03.683009 3128 log.go:172] (0xc000922000) (3) Data frame sent\nI0408 00:45:03.683016 3128 log.go:172] (0xc0003e0a50) Data frame received for 3\nI0408 00:45:03.683023 3128 log.go:172] (0xc000922000) (3) Data frame handling\nI0408 00:45:03.683049 3128 log.go:172] (0xc0003e0a50) Data frame received for 5\nI0408 00:45:03.683056 3128 log.go:172] (0xc0009d4280) (5) Data frame handling\nI0408 00:45:03.683069 3128 log.go:172] (0xc0009d4280) (5) Data frame 
sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 00:45:03.683379 3128 log.go:172] (0xc0003e0a50) Data frame received for 5\nI0408 00:45:03.683408 3128 log.go:172] (0xc0009d4280) (5) Data frame handling\nI0408 00:45:03.684839 3128 log.go:172] (0xc0003e0a50) Data frame received for 1\nI0408 00:45:03.684857 3128 log.go:172] (0xc0009d41e0) (1) Data frame handling\nI0408 00:45:03.684869 3128 log.go:172] (0xc0009d41e0) (1) Data frame sent\nI0408 00:45:03.684879 3128 log.go:172] (0xc0003e0a50) (0xc0009d41e0) Stream removed, broadcasting: 1\nI0408 00:45:03.684888 3128 log.go:172] (0xc0003e0a50) Go away received\nI0408 00:45:03.685338 3128 log.go:172] (0xc0003e0a50) (0xc0009d41e0) Stream removed, broadcasting: 1\nI0408 00:45:03.685355 3128 log.go:172] (0xc0003e0a50) (0xc000922000) Stream removed, broadcasting: 3\nI0408 00:45:03.685364 3128 log.go:172] (0xc0003e0a50) (0xc0009d4280) Stream removed, broadcasting: 5\n" Apr 8 00:45:03.688: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 00:45:03.688: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 00:45:03.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1692 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 00:45:03.932: INFO: stderr: "I0408 00:45:03.817666 3151 log.go:172] (0xc000a4b6b0) (0xc000a286e0) Create stream\nI0408 00:45:03.817732 3151 log.go:172] (0xc000a4b6b0) (0xc000a286e0) Stream added, broadcasting: 1\nI0408 00:45:03.822423 3151 log.go:172] (0xc000a4b6b0) Reply frame received for 1\nI0408 00:45:03.822467 3151 log.go:172] (0xc000a4b6b0) (0xc0006175e0) Create stream\nI0408 00:45:03.822483 3151 log.go:172] (0xc000a4b6b0) (0xc0006175e0) Stream added, broadcasting: 3\nI0408 00:45:03.823557 3151 log.go:172] (0xc000a4b6b0) Reply frame received for 
3\nI0408 00:45:03.823605 3151 log.go:172] (0xc000a4b6b0) (0xc000534a00) Create stream\nI0408 00:45:03.823619 3151 log.go:172] (0xc000a4b6b0) (0xc000534a00) Stream added, broadcasting: 5\nI0408 00:45:03.824548 3151 log.go:172] (0xc000a4b6b0) Reply frame received for 5\nI0408 00:45:03.887542 3151 log.go:172] (0xc000a4b6b0) Data frame received for 5\nI0408 00:45:03.887573 3151 log.go:172] (0xc000534a00) (5) Data frame handling\nI0408 00:45:03.887593 3151 log.go:172] (0xc000534a00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 00:45:03.923909 3151 log.go:172] (0xc000a4b6b0) Data frame received for 3\nI0408 00:45:03.923938 3151 log.go:172] (0xc0006175e0) (3) Data frame handling\nI0408 00:45:03.923966 3151 log.go:172] (0xc0006175e0) (3) Data frame sent\nI0408 00:45:03.924059 3151 log.go:172] (0xc000a4b6b0) Data frame received for 3\nI0408 00:45:03.924083 3151 log.go:172] (0xc0006175e0) (3) Data frame handling\nI0408 00:45:03.924136 3151 log.go:172] (0xc000a4b6b0) Data frame received for 5\nI0408 00:45:03.924164 3151 log.go:172] (0xc000534a00) (5) Data frame handling\nI0408 00:45:03.926651 3151 log.go:172] (0xc000a4b6b0) Data frame received for 1\nI0408 00:45:03.926696 3151 log.go:172] (0xc000a286e0) (1) Data frame handling\nI0408 00:45:03.926739 3151 log.go:172] (0xc000a286e0) (1) Data frame sent\nI0408 00:45:03.926766 3151 log.go:172] (0xc000a4b6b0) (0xc000a286e0) Stream removed, broadcasting: 1\nI0408 00:45:03.926785 3151 log.go:172] (0xc000a4b6b0) Go away received\nI0408 00:45:03.927241 3151 log.go:172] (0xc000a4b6b0) (0xc000a286e0) Stream removed, broadcasting: 1\nI0408 00:45:03.927265 3151 log.go:172] (0xc000a4b6b0) (0xc0006175e0) Stream removed, broadcasting: 3\nI0408 00:45:03.927276 3151 log.go:172] (0xc000a4b6b0) (0xc000534a00) Stream removed, broadcasting: 5\n" Apr 8 00:45:03.932: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 00:45:03.932: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 00:45:03.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1692 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 00:45:04.165: INFO: stderr: "I0408 00:45:04.061866 3171 log.go:172] (0xc0008c2a50) (0xc00044a0a0) Create stream\nI0408 00:45:04.061916 3171 log.go:172] (0xc0008c2a50) (0xc00044a0a0) Stream added, broadcasting: 1\nI0408 00:45:04.064537 3171 log.go:172] (0xc0008c2a50) Reply frame received for 1\nI0408 00:45:04.064584 3171 log.go:172] (0xc0008c2a50) (0xc0006bd0e0) Create stream\nI0408 00:45:04.064597 3171 log.go:172] (0xc0008c2a50) (0xc0006bd0e0) Stream added, broadcasting: 3\nI0408 00:45:04.065615 3171 log.go:172] (0xc0008c2a50) Reply frame received for 3\nI0408 00:45:04.065648 3171 log.go:172] (0xc0008c2a50) (0xc00044a1e0) Create stream\nI0408 00:45:04.065660 3171 log.go:172] (0xc0008c2a50) (0xc00044a1e0) Stream added, broadcasting: 5\nI0408 00:45:04.066444 3171 log.go:172] (0xc0008c2a50) Reply frame received for 5\nI0408 00:45:04.120365 3171 log.go:172] (0xc0008c2a50) Data frame received for 5\nI0408 00:45:04.120440 3171 log.go:172] (0xc00044a1e0) (5) Data frame handling\nI0408 00:45:04.120465 3171 log.go:172] (0xc00044a1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 00:45:04.158036 3171 log.go:172] (0xc0008c2a50) Data frame received for 3\nI0408 00:45:04.158089 3171 log.go:172] (0xc0006bd0e0) (3) Data frame handling\nI0408 00:45:04.158127 3171 log.go:172] (0xc0006bd0e0) (3) Data frame sent\nI0408 00:45:04.158143 3171 log.go:172] (0xc0008c2a50) Data frame received for 3\nI0408 00:45:04.158153 3171 log.go:172] (0xc0006bd0e0) (3) Data frame handling\nI0408 00:45:04.158230 3171 log.go:172] (0xc0008c2a50) Data frame received for 5\nI0408 00:45:04.158253 3171 log.go:172] 
(0xc00044a1e0) (5) Data frame handling\nI0408 00:45:04.159804 3171 log.go:172] (0xc0008c2a50) Data frame received for 1\nI0408 00:45:04.159829 3171 log.go:172] (0xc00044a0a0) (1) Data frame handling\nI0408 00:45:04.159840 3171 log.go:172] (0xc00044a0a0) (1) Data frame sent\nI0408 00:45:04.159851 3171 log.go:172] (0xc0008c2a50) (0xc00044a0a0) Stream removed, broadcasting: 1\nI0408 00:45:04.159867 3171 log.go:172] (0xc0008c2a50) Go away received\nI0408 00:45:04.160329 3171 log.go:172] (0xc0008c2a50) (0xc00044a0a0) Stream removed, broadcasting: 1\nI0408 00:45:04.160353 3171 log.go:172] (0xc0008c2a50) (0xc0006bd0e0) Stream removed, broadcasting: 3\nI0408 00:45:04.160366 3171 log.go:172] (0xc0008c2a50) (0xc00044a1e0) Stream removed, broadcasting: 5\n" Apr 8 00:45:04.165: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 00:45:04.165: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 00:45:04.165: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 00:45:04.168: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 8 00:45:14.178: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 8 00:45:14.178: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 8 00:45:14.178: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 8 00:45:14.194: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 00:45:14.194: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC }] Apr 8 00:45:14.194: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:14.194: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:14.194: INFO: Apr 8 00:45:14.194: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 00:45:15.199: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 00:45:15.199: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC }] Apr 8 00:45:15.199: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:15.199: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:15.199: INFO: Apr 8 00:45:15.199: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 00:45:16.205: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 00:45:16.205: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC }] Apr 8 00:45:16.205: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:16.205: INFO: ss-2 latest-worker Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:16.205: INFO: Apr 8 00:45:16.205: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 00:45:17.209: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 00:45:17.209: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC }] Apr 8 00:45:17.210: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:17.210: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:17.210: INFO: Apr 8 00:45:17.210: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 00:45:18.214: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 00:45:18.214: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC }] Apr 8 00:45:18.214: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:18.214: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:18.214: INFO: Apr 8 00:45:18.214: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 00:45:19.219: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 00:45:19.219: INFO: ss-0 latest-worker2 Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC }] Apr 8 00:45:19.219: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:19.219: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:19.219: INFO: Apr 8 00:45:19.219: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 00:45:20.224: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 00:45:20.224: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC }] Apr 8 00:45:20.224: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:20.224: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:20.224: INFO: Apr 8 00:45:20.224: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 00:45:21.247: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 00:45:21.247: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC }] Apr 8 00:45:21.247: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:21.247: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:21.247: INFO: Apr 8 00:45:21.247: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 00:45:22.252: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 00:45:22.252: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:32 +0000 UTC }] Apr 8 00:45:22.252: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:22.252: INFO: ss-2 
latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:45:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 00:44:52 +0000 UTC }] Apr 8 00:45:22.252: INFO: Apr 8 00:45:22.252: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 00:45:23.256: INFO: Verifying statefulset ss doesn't scale past 0 for another 931.970894ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1692 Apr 8 00:45:24.260: INFO: Scaling statefulset ss to 0 Apr 8 00:45:24.268: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 8 00:45:24.271: INFO: Deleting all statefulset in ns statefulset-1692 Apr 8 00:45:24.273: INFO: Scaling statefulset ss to 0 Apr 8 00:45:24.280: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 00:45:24.283: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:45:24.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1692" for this suite. 
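Editor's note: the burst-scaling test above polls pod conditions while scaling the StatefulSet `ss` down to 0. The behavior it exercises is the difference between the default `OrderedReady` pod management and `Parallel` ("burst") management. The sketch below is a hypothetical helper, not code from the e2e framework, illustrating the deletion order implied by each policy under standard StatefulSet semantics.

```python
# Sketch of StatefulSet scale-down ordering (hypothetical helper, not the
# e2e framework's code). With the default OrderedReady policy, pods are
# removed one at a time from the highest ordinal down; with Parallel
# ("burst") management they can all be deleted in one batch, even while
# some pods are unhealthy -- which is what this conformance test verifies.

def scale_down_order(replicas: int, policy: str) -> list:
    """Return batches of pod names deleted when scaling to 0 replicas."""
    pods = [f"ss-{i}" for i in range(replicas)]
    if policy == "Parallel":
        # Burst scaling: a single batch containing every pod.
        return [list(reversed(pods))]
    # OrderedReady: highest ordinal first, one pod per batch.
    return [[p] for p in reversed(pods)]

print(scale_down_order(3, "OrderedReady"))  # [['ss-2'], ['ss-1'], ['ss-0']]
print(scale_down_order(3, "Parallel"))      # [['ss-2', 'ss-1', 'ss-0']]
```

This matches the log above: three pods (`ss-0`..`ss-2`) are torn down together rather than waiting for each ordinal to become Ready.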
• [SLOW TEST:51.912 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":250,"skipped":4404,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:45:24.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:45:24.390: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fae4e62-4305-477a-8757-fd3d65aac238" in namespace 
"projected-5385" to be "Succeeded or Failed" Apr 8 00:45:24.409: INFO: Pod "downwardapi-volume-3fae4e62-4305-477a-8757-fd3d65aac238": Phase="Pending", Reason="", readiness=false. Elapsed: 19.768853ms Apr 8 00:45:26.413: INFO: Pod "downwardapi-volume-3fae4e62-4305-477a-8757-fd3d65aac238": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023466772s Apr 8 00:45:28.423: INFO: Pod "downwardapi-volume-3fae4e62-4305-477a-8757-fd3d65aac238": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033411852s STEP: Saw pod success Apr 8 00:45:28.423: INFO: Pod "downwardapi-volume-3fae4e62-4305-477a-8757-fd3d65aac238" satisfied condition "Succeeded or Failed" Apr 8 00:45:28.426: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3fae4e62-4305-477a-8757-fd3d65aac238 container client-container: STEP: delete the pod Apr 8 00:45:28.442: INFO: Waiting for pod downwardapi-volume-3fae4e62-4305-477a-8757-fd3d65aac238 to disappear Apr 8 00:45:28.447: INFO: Pod downwardapi-volume-3fae4e62-4305-477a-8757-fd3d65aac238 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:45:28.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5385" for this suite. 
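Editor's note: the test above checks that when a container sets no memory limit, the projected downward API volume reports the node's allocatable memory instead. A minimal sketch of the kind of pod manifest involved (pod, volume, and image names here are illustrative, not the exact ones the e2e test generates):

```python
import json

# Sketch of a pod using a projected downward API volume to expose the
# container's memory limit as a file. The container deliberately sets no
# resources.limits.memory, so the kubelet falls back to the node's
# allocatable memory -- the behavior the conformance test asserts.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/podinfo/memory_limit"],
            # note: no "resources" key -> node allocatable is the default
            "volumeMounts": [{"name": "podinfo",
                              "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {"sources": [{
                "downwardAPI": {"items": [{
                    "path": "memory_limit",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.memory",
                    },
                }]},
            }]},
        }],
    },
}

print(json.dumps(pod["spec"]["volumes"][0], indent=2))
```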
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4423,"failed":0} S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:45:28.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-378a6769-c7a2-4342-911a-88068870ab0e in namespace container-probe-5807 Apr 8 00:45:32.574: INFO: Started pod liveness-378a6769-c7a2-4342-911a-88068870ab0e in namespace container-probe-5807 STEP: checking the pod's current state and verifying that restartCount is present Apr 8 00:45:32.577: INFO: Initial restart count of pod liveness-378a6769-c7a2-4342-911a-88068870ab0e is 0 Apr 8 00:45:44.604: INFO: Restart count of pod container-probe-5807/liveness-378a6769-c7a2-4342-911a-88068870ab0e is now 1 (12.026958001s elapsed) Apr 8 00:46:04.646: INFO: Restart count of pod container-probe-5807/liveness-378a6769-c7a2-4342-911a-88068870ab0e is now 2 (32.068504131s elapsed) Apr 8 00:46:24.688: INFO: Restart count of pod container-probe-5807/liveness-378a6769-c7a2-4342-911a-88068870ab0e is now 3 
(52.110454657s elapsed) Apr 8 00:46:46.763: INFO: Restart count of pod container-probe-5807/liveness-378a6769-c7a2-4342-911a-88068870ab0e is now 4 (1m14.185990065s elapsed) Apr 8 00:47:52.959: INFO: Restart count of pod container-probe-5807/liveness-378a6769-c7a2-4342-911a-88068870ab0e is now 5 (2m20.381417916s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:47:52.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5807" for this suite. • [SLOW TEST:144.594 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4424,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:47:53.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should 
run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:47:53.309: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d40ced6a-b1b2-470e-b52f-6ac12197f1ee" in namespace "security-context-test-3596" to be "Succeeded or Failed" Apr 8 00:47:53.329: INFO: Pod "busybox-user-65534-d40ced6a-b1b2-470e-b52f-6ac12197f1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 19.729202ms Apr 8 00:47:55.333: INFO: Pod "busybox-user-65534-d40ced6a-b1b2-470e-b52f-6ac12197f1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023854254s Apr 8 00:47:57.337: INFO: Pod "busybox-user-65534-d40ced6a-b1b2-470e-b52f-6ac12197f1ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028382024s Apr 8 00:47:57.338: INFO: Pod "busybox-user-65534-d40ced6a-b1b2-470e-b52f-6ac12197f1ee" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:47:57.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3596" for this suite. 
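Editor's note: the runAsUser test above launches a busybox pod whose security context forces UID 65534 ("nobody") and waits for it to succeed. A minimal sketch of that kind of manifest, with illustrative names (the e2e framework generates its own pod names, as seen in the log):

```python
# Sketch of a pod exercising securityContext.runAsUser. The container
# prints its effective UID; with runAsUser set, the kubelet starts the
# process as UID 65534 regardless of the image's default user.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-user-65534-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "busybox",
            "image": "busybox",
            "command": ["sh", "-c", "id -u"],  # expected to print 65534
            "securityContext": {"runAsUser": 65534},
        }],
    },
}

print(pod["spec"]["containers"][0]["securityContext"])
```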
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4436,"failed":0} ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:47:57.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-2679/secret-test-69133332-3837-464c-86fc-5e9b86cfdd96 STEP: Creating a pod to test consume secrets Apr 8 00:47:57.431: INFO: Waiting up to 5m0s for pod "pod-configmaps-544fa6e1-f456-43ae-ad6f-dec69ca7a5b0" in namespace "secrets-2679" to be "Succeeded or Failed" Apr 8 00:47:57.433: INFO: Pod "pod-configmaps-544fa6e1-f456-43ae-ad6f-dec69ca7a5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.464479ms Apr 8 00:47:59.444: INFO: Pod "pod-configmaps-544fa6e1-f456-43ae-ad6f-dec69ca7a5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013444011s Apr 8 00:48:01.448: INFO: Pod "pod-configmaps-544fa6e1-f456-43ae-ad6f-dec69ca7a5b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016939595s STEP: Saw pod success Apr 8 00:48:01.448: INFO: Pod "pod-configmaps-544fa6e1-f456-43ae-ad6f-dec69ca7a5b0" satisfied condition "Succeeded or Failed" Apr 8 00:48:01.450: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-544fa6e1-f456-43ae-ad6f-dec69ca7a5b0 container env-test: STEP: delete the pod Apr 8 00:48:01.481: INFO: Waiting for pod pod-configmaps-544fa6e1-f456-43ae-ad6f-dec69ca7a5b0 to disappear Apr 8 00:48:01.486: INFO: Pod pod-configmaps-544fa6e1-f456-43ae-ad6f-dec69ca7a5b0 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:48:01.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2679" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4436,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:48:01.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to 
be ready Apr 8 00:48:02.153: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 00:48:04.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903682, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903682, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903682, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903682, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 00:48:07.190: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:48:07.393: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "webhook-2785" for this suite. STEP: Destroying namespace "webhook-2785-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.967 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":255,"skipped":4438,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:48:07.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-782g5 in namespace proxy-970 I0408 00:48:07.577385 7 runners.go:190] Created replication controller with name: proxy-service-782g5, namespace: proxy-970, replica count: 1 I0408 00:48:08.627811 7 runners.go:190] proxy-service-782g5 Pods: 1 out of 1 created, 0 running, 1 pending, 
0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 00:48:09.628060 7 runners.go:190] proxy-service-782g5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 00:48:10.628303 7 runners.go:190] proxy-service-782g5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 00:48:11.628518 7 runners.go:190] proxy-service-782g5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0408 00:48:12.628766 7 runners.go:190] proxy-service-782g5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0408 00:48:13.628974 7 runners.go:190] proxy-service-782g5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0408 00:48:14.629275 7 runners.go:190] proxy-service-782g5 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 8 00:48:14.632: INFO: setup took 7.113027301s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 8 00:48:14.636: INFO: (0) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:1080/proxy/: t... 
(200; 4.0228ms) Apr 8 00:48:14.638: INFO: (0) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:160/proxy/: foo (200; 6.034439ms) Apr 8 00:48:14.638: INFO: (0) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname2/proxy/: bar (200; 6.164833ms) Apr 8 00:48:14.638: INFO: (0) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 5.849511ms) Apr 8 00:48:14.638: INFO: (0) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:162/proxy/: bar (200; 6.028363ms) Apr 8 00:48:14.639: INFO: (0) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:160/proxy/: foo (200; 6.766877ms) Apr 8 00:48:14.640: INFO: (0) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname1/proxy/: foo (200; 7.856686ms) Apr 8 00:48:14.642: INFO: (0) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname2/proxy/: bar (200; 9.74297ms) Apr 8 00:48:14.642: INFO: (0) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname1/proxy/: foo (200; 10.045257ms) Apr 8 00:48:14.642: INFO: (0) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: testtest (200; 10.502257ms) Apr 8 00:48:14.646: INFO: (0) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:462/proxy/: tls qux (200; 14.17354ms) Apr 8 00:48:14.647: INFO: (0) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:460/proxy/: tls baz (200; 14.672809ms) Apr 8 00:48:14.647: INFO: (0) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: t... 
(200; 15.952857ms) Apr 8 00:48:14.672: INFO: (1) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9/proxy/: test (200; 15.982838ms) Apr 8 00:48:14.672: INFO: (1) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 15.957326ms) Apr 8 00:48:14.672: INFO: (1) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:160/proxy/: foo (200; 15.906926ms) Apr 8 00:48:14.672: INFO: (1) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:162/proxy/: bar (200; 16.246562ms) Apr 8 00:48:14.672: INFO: (1) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:160/proxy/: foo (200; 16.340702ms) Apr 8 00:48:14.673: INFO: (1) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:462/proxy/: tls qux (200; 16.837835ms) Apr 8 00:48:14.673: INFO: (1) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: testtest (200; 4.909684ms) Apr 8 00:48:14.680: INFO: (2) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:462/proxy/: tls qux (200; 5.03766ms) Apr 8 00:48:14.681: INFO: (2) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:460/proxy/: tls baz (200; 5.704531ms) Apr 8 00:48:14.681: INFO: (2) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: testt... 
(200; 6.670455ms) Apr 8 00:48:14.682: INFO: (2) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname2/proxy/: bar (200; 7.087617ms) Apr 8 00:48:14.683: INFO: (2) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname1/proxy/: foo (200; 7.455906ms) Apr 8 00:48:14.683: INFO: (2) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:160/proxy/: foo (200; 7.382621ms) Apr 8 00:48:14.683: INFO: (2) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname1/proxy/: foo (200; 7.435562ms) Apr 8 00:48:14.683: INFO: (2) /api/v1/namespaces/proxy-970/services/https:proxy-service-782g5:tlsportname2/proxy/: tls qux (200; 7.76169ms) Apr 8 00:48:14.686: INFO: (3) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:162/proxy/: bar (200; 2.704356ms) Apr 8 00:48:14.687: INFO: (3) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:460/proxy/: tls baz (200; 3.411723ms) Apr 8 00:48:14.687: INFO: (3) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:1080/proxy/: t... (200; 3.608379ms) Apr 8 00:48:14.687: INFO: (3) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname2/proxy/: bar (200; 3.736856ms) Apr 8 00:48:14.687: INFO: (3) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 3.624177ms) Apr 8 00:48:14.687: INFO: (3) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: test (200; 3.892552ms) Apr 8 00:48:14.687: INFO: (3) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:160/proxy/: foo (200; 3.983051ms) Apr 8 00:48:14.687: INFO: (3) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: testtest (200; 3.524661ms) Apr 8 00:48:14.692: INFO: (4) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:162/proxy/: bar (200; 3.604649ms) Apr 8 00:48:14.692: INFO: (4) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:1080/proxy/: t... 
(200; 3.601693ms) Apr 8 00:48:14.692: INFO: (4) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: testt... (200; 3.874414ms) Apr 8 00:48:14.699: INFO: (5) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: testtest (200; 4.37961ms) Apr 8 00:48:14.699: INFO: (5) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:162/proxy/: bar (200; 4.292427ms) Apr 8 00:48:14.699: INFO: (5) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:460/proxy/: tls baz (200; 4.719014ms) Apr 8 00:48:14.699: INFO: (5) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:160/proxy/: foo (200; 4.545046ms) Apr 8 00:48:14.699: INFO: (5) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname2/proxy/: bar (200; 4.478907ms) Apr 8 00:48:14.699: INFO: (5) /api/v1/namespaces/proxy-970/services/https:proxy-service-782g5:tlsportname1/proxy/: tls baz (200; 4.651788ms) Apr 8 00:48:14.700: INFO: (5) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname1/proxy/: foo (200; 5.151779ms) Apr 8 00:48:14.700: INFO: (5) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname1/proxy/: foo (200; 5.059435ms) Apr 8 00:48:14.700: INFO: (5) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname2/proxy/: bar (200; 5.254177ms) Apr 8 00:48:14.700: INFO: (5) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 6.008702ms) Apr 8 00:48:14.700: INFO: (5) /api/v1/namespaces/proxy-970/services/https:proxy-service-782g5:tlsportname2/proxy/: tls qux (200; 5.443415ms) Apr 8 00:48:14.703: INFO: (6) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:462/proxy/: tls qux (200; 2.664708ms) Apr 8 00:48:14.703: INFO: (6) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:460/proxy/: tls baz (200; 2.619863ms) Apr 8 00:48:14.703: INFO: (6) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9/proxy/: test (200; 2.985559ms) Apr 8 
00:48:14.703: INFO: (6) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: testt... (200; 9.598699ms) Apr 8 00:48:14.710: INFO: (6) /api/v1/namespaces/proxy-970/services/https:proxy-service-782g5:tlsportname1/proxy/: tls baz (200; 9.53278ms) Apr 8 00:48:14.710: INFO: (6) /api/v1/namespaces/proxy-970/services/https:proxy-service-782g5:tlsportname2/proxy/: tls qux (200; 9.630409ms) Apr 8 00:48:14.710: INFO: (6) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname2/proxy/: bar (200; 9.582623ms) Apr 8 00:48:14.710: INFO: (6) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname1/proxy/: foo (200; 9.917211ms) Apr 8 00:48:14.710: INFO: (6) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 9.892504ms) Apr 8 00:48:14.710: INFO: (6) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname1/proxy/: foo (200; 9.965726ms) Apr 8 00:48:14.710: INFO: (6) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: testt... (200; 5.821904ms) Apr 8 00:48:14.716: INFO: (7) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:160/proxy/: foo (200; 5.921407ms) Apr 8 00:48:14.716: INFO: (7) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:160/proxy/: foo (200; 5.965844ms) Apr 8 00:48:14.716: INFO: (7) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9/proxy/: test (200; 5.937759ms) Apr 8 00:48:14.716: INFO: (7) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 5.984446ms) Apr 8 00:48:14.717: INFO: (7) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:462/proxy/: tls qux (200; 6.413769ms) Apr 8 00:48:14.717: INFO: (7) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: testt... 
(200; 4.388877ms) Apr 8 00:48:14.722: INFO: (8) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:462/proxy/: tls qux (200; 4.381921ms) Apr 8 00:48:14.722: INFO: (8) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: test (200; 4.464039ms) Apr 8 00:48:14.722: INFO: (8) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:162/proxy/: bar (200; 4.53353ms) Apr 8 00:48:14.722: INFO: (8) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:460/proxy/: tls baz (200; 4.579617ms) Apr 8 00:48:14.723: INFO: (8) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname2/proxy/: bar (200; 5.334593ms) Apr 8 00:48:14.723: INFO: (8) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname1/proxy/: foo (200; 5.36656ms) Apr 8 00:48:14.723: INFO: (8) /api/v1/namespaces/proxy-970/services/https:proxy-service-782g5:tlsportname2/proxy/: tls qux (200; 5.40885ms) Apr 8 00:48:14.723: INFO: (8) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname2/proxy/: bar (200; 5.420439ms) Apr 8 00:48:14.723: INFO: (8) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname1/proxy/: foo (200; 5.444015ms) Apr 8 00:48:14.723: INFO: (8) /api/v1/namespaces/proxy-970/services/https:proxy-service-782g5:tlsportname1/proxy/: tls baz (200; 5.562229ms) Apr 8 00:48:14.726: INFO: (9) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:462/proxy/: tls qux (200; 2.911526ms) Apr 8 00:48:14.726: INFO: (9) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:160/proxy/: foo (200; 3.055409ms) Apr 8 00:48:14.726: INFO: (9) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:162/proxy/: bar (200; 3.23804ms) Apr 8 00:48:14.726: INFO: (9) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:460/proxy/: tls baz (200; 3.25781ms) Apr 8 00:48:14.726: INFO: (9) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 3.290573ms) Apr 
8 00:48:14.726: INFO: (9) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:1080/proxy/: t... (200; 3.456156ms) Apr 8 00:48:14.726: INFO: (9) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9/proxy/: test (200; 3.487355ms) Apr 8 00:48:14.727: INFO: (9) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: testt... (200; 1.750943ms) Apr 8 00:48:14.731: INFO: (10) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 3.006341ms) Apr 8 00:48:14.731: INFO: (10) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9/proxy/: test (200; 3.017319ms) Apr 8 00:48:14.731: INFO: (10) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:160/proxy/: foo (200; 3.09993ms) Apr 8 00:48:14.731: INFO: (10) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: testtest (200; 2.676123ms) Apr 8 00:48:14.735: INFO: (11) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:460/proxy/: tls baz (200; 2.950959ms) Apr 8 00:48:14.736: INFO: (11) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:162/proxy/: bar (200; 4.099718ms) Apr 8 00:48:14.736: INFO: (11) /api/v1/namespaces/proxy-970/services/https:proxy-service-782g5:tlsportname1/proxy/: tls baz (200; 4.094893ms) Apr 8 00:48:14.736: INFO: (11) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname2/proxy/: bar (200; 4.055945ms) Apr 8 00:48:14.736: INFO: (11) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:160/proxy/: foo (200; 4.089051ms) Apr 8 00:48:14.736: INFO: (11) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname1/proxy/: foo (200; 4.094712ms) Apr 8 00:48:14.736: INFO: (11) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:1080/proxy/: t... (200; 4.07339ms) Apr 8 00:48:14.736: INFO: (11) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: testt... 
(200; 4.080944ms) Apr 8 00:48:14.741: INFO: (12) /api/v1/namespaces/proxy-970/services/https:proxy-service-782g5:tlsportname2/proxy/: tls qux (200; 4.096959ms) Apr 8 00:48:14.741: INFO: (12) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname2/proxy/: bar (200; 4.06803ms) Apr 8 00:48:14.741: INFO: (12) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9/proxy/: test (200; 4.213744ms) Apr 8 00:48:14.741: INFO: (12) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: testtest (200; 4.140331ms) Apr 8 00:48:14.746: INFO: (13) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:1080/proxy/: t... (200; 4.216566ms) Apr 8 00:48:14.746: INFO: (13) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: testtest (200; 3.340136ms) Apr 8 00:48:14.751: INFO: (14) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: testt... (200; 3.383339ms) Apr 8 00:48:14.751: INFO: (14) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 3.428019ms) Apr 8 00:48:14.751: INFO: (14) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: testtest (200; 5.167201ms) Apr 8 00:48:14.758: INFO: (15) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:1080/proxy/: t... (200; 5.059322ms) Apr 8 00:48:14.758: INFO: (15) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: test (200; 3.71748ms) Apr 8 00:48:14.766: INFO: (16) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:1080/proxy/: t... 
(200; 3.742869ms) Apr 8 00:48:14.766: INFO: (16) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:462/proxy/: tls qux (200; 3.63339ms) Apr 8 00:48:14.766: INFO: (16) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:160/proxy/: foo (200; 3.872686ms) Apr 8 00:48:14.766: INFO: (16) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 3.838209ms) Apr 8 00:48:14.771: INFO: (16) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:460/proxy/: tls baz (200; 8.783435ms) Apr 8 00:48:14.771: INFO: (16) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: testt... (200; 2.467441ms) Apr 8 00:48:14.775: INFO: (17) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: test (200; 4.689237ms) Apr 8 00:48:14.777: INFO: (17) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:160/proxy/: foo (200; 4.693702ms) Apr 8 00:48:14.777: INFO: (17) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: testtestt... 
(200; 3.460546ms) Apr 8 00:48:14.781: INFO: (18) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:462/proxy/: tls qux (200; 3.509546ms) Apr 8 00:48:14.781: INFO: (18) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:160/proxy/: foo (200; 3.427542ms) Apr 8 00:48:14.781: INFO: (18) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:460/proxy/: tls baz (200; 3.537356ms) Apr 8 00:48:14.783: INFO: (18) /api/v1/namespaces/proxy-970/services/https:proxy-service-782g5:tlsportname2/proxy/: tls qux (200; 5.54428ms) Apr 8 00:48:14.784: INFO: (18) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname2/proxy/: bar (200; 5.743216ms) Apr 8 00:48:14.784: INFO: (18) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:160/proxy/: foo (200; 6.241688ms) Apr 8 00:48:14.784: INFO: (18) /api/v1/namespaces/proxy-970/services/proxy-service-782g5:portname1/proxy/: foo (200; 6.349202ms) Apr 8 00:48:14.784: INFO: (18) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 6.341308ms) Apr 8 00:48:14.784: INFO: (18) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:162/proxy/: bar (200; 6.311169ms) Apr 8 00:48:14.784: INFO: (18) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname2/proxy/: bar (200; 6.370637ms) Apr 8 00:48:14.784: INFO: (18) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9/proxy/: test (200; 6.360839ms) Apr 8 00:48:14.784: INFO: (18) /api/v1/namespaces/proxy-970/services/https:proxy-service-782g5:tlsportname1/proxy/: tls baz (200; 6.3963ms) Apr 8 00:48:14.784: INFO: (18) /api/v1/namespaces/proxy-970/services/http:proxy-service-782g5:portname1/proxy/: foo (200; 6.581825ms) Apr 8 00:48:14.788: INFO: (19) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:162/proxy/: bar (200; 3.197219ms) Apr 8 00:48:14.788: INFO: (19) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:162/proxy/: bar (200; 3.350442ms) Apr 8 
00:48:14.788: INFO: (19) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9/proxy/: test (200; 3.786658ms) Apr 8 00:48:14.788: INFO: (19) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:160/proxy/: foo (200; 3.757328ms) Apr 8 00:48:14.788: INFO: (19) /api/v1/namespaces/proxy-970/pods/http:proxy-service-782g5-crxd9:160/proxy/: foo (200; 3.87328ms) Apr 8 00:48:14.788: INFO: (19) /api/v1/namespaces/proxy-970/pods/https:proxy-service-782g5-crxd9:443/proxy/: t... (200; 3.856162ms) Apr 8 00:48:14.788: INFO: (19) /api/v1/namespaces/proxy-970/pods/proxy-service-782g5-crxd9:1080/proxy/: test>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:48:23.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-442" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":257,"skipped":4444,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:48:23.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 8 00:48:29.728: INFO: 0 pods remaining Apr 8 00:48:29.728: INFO: 0 pods has nil DeletionTimestamp Apr 8 00:48:29.728: INFO: Apr 8 00:48:30.340: INFO: 0 pods remaining Apr 8 00:48:30.340: INFO: 0 pods has nil DeletionTimestamp Apr 8 00:48:30.340: INFO: STEP: Gathering metrics W0408 00:48:31.547778 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 8 00:48:31.547: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:48:31.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8378" for this suite. 
• [SLOW TEST:8.592 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":258,"skipped":4448,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:48:31.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:48:32.124: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24b5e7f1-0a7c-4b50-a5cf-da84305f6620" in namespace "downward-api-5350" to be "Succeeded or Failed" Apr 8 00:48:32.188: INFO: Pod "downwardapi-volume-24b5e7f1-0a7c-4b50-a5cf-da84305f6620": Phase="Pending", Reason="", readiness=false. 
Elapsed: 64.864574ms Apr 8 00:48:34.223: INFO: Pod "downwardapi-volume-24b5e7f1-0a7c-4b50-a5cf-da84305f6620": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099295092s Apr 8 00:48:36.227: INFO: Pod "downwardapi-volume-24b5e7f1-0a7c-4b50-a5cf-da84305f6620": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103742801s STEP: Saw pod success Apr 8 00:48:36.227: INFO: Pod "downwardapi-volume-24b5e7f1-0a7c-4b50-a5cf-da84305f6620" satisfied condition "Succeeded or Failed" Apr 8 00:48:36.231: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-24b5e7f1-0a7c-4b50-a5cf-da84305f6620 container client-container: STEP: delete the pod Apr 8 00:48:36.250: INFO: Waiting for pod downwardapi-volume-24b5e7f1-0a7c-4b50-a5cf-da84305f6620 to disappear Apr 8 00:48:36.270: INFO: Pod downwardapi-volume-24b5e7f1-0a7c-4b50-a5cf-da84305f6620 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:48:36.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5350" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4458,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:48:36.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:48:36.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3995" for this suite. STEP: Destroying namespace "nspatchtest-59e12be3-590c-4731-bcec-4f4ead3d43f8-3014" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":260,"skipped":4474,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:48:36.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2503 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 8 00:48:36.535: INFO: Found 0 stateful pods, waiting for 3 Apr 8 00:48:46.541: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 00:48:46.541: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 00:48:46.541: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 8 00:48:46.566: INFO: Updating stateful set 
ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 8 00:48:56.689: INFO: Updating stateful set ss2 Apr 8 00:48:56.730: INFO: Waiting for Pod statefulset-2503/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 8 00:49:06.836: INFO: Found 2 stateful pods, waiting for 3 Apr 8 00:49:16.842: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 00:49:16.842: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 00:49:16.842: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 8 00:49:16.866: INFO: Updating stateful set ss2 Apr 8 00:49:16.893: INFO: Waiting for Pod statefulset-2503/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 8 00:49:26.916: INFO: Updating stateful set ss2 Apr 8 00:49:26.967: INFO: Waiting for StatefulSet statefulset-2503/ss2 to complete update Apr 8 00:49:26.967: INFO: Waiting for Pod statefulset-2503/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 8 00:49:36.974: INFO: Deleting all statefulset in ns statefulset-2503 Apr 8 00:49:36.977: INFO: Scaling statefulset ss2 to 0 Apr 8 00:49:46.996: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 00:49:46.999: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:49:47.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2503" for this suite. 
• [SLOW TEST:70.586 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":261,"skipped":4481,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:49:47.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-df7b9253-e9c3-4681-81de-6feb910fd548 STEP: Creating a pod to test consume secrets Apr 8 00:49:47.122: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-52a07438-2488-4960-9929-5f5e198abfa2" in namespace "projected-8202" to be "Succeeded or Failed" Apr 8 00:49:47.130: INFO: Pod 
"pod-projected-secrets-52a07438-2488-4960-9929-5f5e198abfa2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121655ms Apr 8 00:49:49.143: INFO: Pod "pod-projected-secrets-52a07438-2488-4960-9929-5f5e198abfa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020584955s Apr 8 00:49:51.148: INFO: Pod "pod-projected-secrets-52a07438-2488-4960-9929-5f5e198abfa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025471936s STEP: Saw pod success Apr 8 00:49:51.148: INFO: Pod "pod-projected-secrets-52a07438-2488-4960-9929-5f5e198abfa2" satisfied condition "Succeeded or Failed" Apr 8 00:49:51.150: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-52a07438-2488-4960-9929-5f5e198abfa2 container projected-secret-volume-test: STEP: delete the pod Apr 8 00:49:51.185: INFO: Waiting for pod pod-projected-secrets-52a07438-2488-4960-9929-5f5e198abfa2 to disappear Apr 8 00:49:51.203: INFO: Pod pod-projected-secrets-52a07438-2488-4960-9929-5f5e198abfa2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:49:51.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8202" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4482,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:49:51.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-f7mg STEP: Creating a pod to test atomic-volume-subpath Apr 8 00:49:51.306: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-f7mg" in namespace "subpath-9753" to be "Succeeded or Failed" Apr 8 00:49:51.310: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.954465ms Apr 8 00:49:53.313: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007390523s Apr 8 00:49:55.317: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Running", Reason="", readiness=true. Elapsed: 4.011440938s Apr 8 00:49:57.322: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.015686329s Apr 8 00:49:59.326: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Running", Reason="", readiness=true. Elapsed: 8.020045632s Apr 8 00:50:01.330: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Running", Reason="", readiness=true. Elapsed: 10.02407906s Apr 8 00:50:03.334: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Running", Reason="", readiness=true. Elapsed: 12.028176939s Apr 8 00:50:05.339: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Running", Reason="", readiness=true. Elapsed: 14.032616901s Apr 8 00:50:07.343: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Running", Reason="", readiness=true. Elapsed: 16.036931121s Apr 8 00:50:09.347: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Running", Reason="", readiness=true. Elapsed: 18.041049502s Apr 8 00:50:11.351: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Running", Reason="", readiness=true. Elapsed: 20.045251564s Apr 8 00:50:13.356: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Running", Reason="", readiness=true. Elapsed: 22.049584145s Apr 8 00:50:15.374: INFO: Pod "pod-subpath-test-configmap-f7mg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.067797312s STEP: Saw pod success Apr 8 00:50:15.374: INFO: Pod "pod-subpath-test-configmap-f7mg" satisfied condition "Succeeded or Failed" Apr 8 00:50:15.376: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-f7mg container test-container-subpath-configmap-f7mg: STEP: delete the pod Apr 8 00:50:15.408: INFO: Waiting for pod pod-subpath-test-configmap-f7mg to disappear Apr 8 00:50:15.425: INFO: Pod pod-subpath-test-configmap-f7mg no longer exists STEP: Deleting pod pod-subpath-test-configmap-f7mg Apr 8 00:50:15.425: INFO: Deleting pod "pod-subpath-test-configmap-f7mg" in namespace "subpath-9753" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:50:15.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9753" for this suite. • [SLOW TEST:24.225 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":263,"skipped":4493,"failed":0} [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:50:15.436: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating an pod Apr 8 00:50:15.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-1853 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 8 00:50:17.941: INFO: stderr: "" Apr 8 00:50:17.941: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Apr 8 00:50:17.941: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 8 00:50:17.941: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1853" to be "running and ready, or succeeded" Apr 8 00:50:17.969: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 28.382551ms Apr 8 00:50:19.974: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032764359s Apr 8 00:50:21.979: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.037465885s Apr 8 00:50:21.979: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 8 00:50:21.979: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Apr 8 00:50:21.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1853' Apr 8 00:50:22.100: INFO: stderr: "" Apr 8 00:50:22.100: INFO: stdout: "I0408 00:50:20.058652 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/xv6x 269\nI0408 00:50:20.258788 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/rjx 446\nI0408 00:50:20.458769 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/bh7 583\nI0408 00:50:20.658872 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/l5v 367\nI0408 00:50:20.858857 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/g84h 413\nI0408 00:50:21.058878 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/7d8s 365\nI0408 00:50:21.258885 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/mxp 247\nI0408 00:50:21.458870 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/z5j8 321\nI0408 00:50:21.658814 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/jfh 495\nI0408 00:50:21.858868 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/n5d2 443\nI0408 00:50:22.058856 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/gdd 316\n" STEP: limiting log lines Apr 8 00:50:22.100: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1853 --tail=1' Apr 8 00:50:22.214: INFO: stderr: "" Apr 8 00:50:22.214: INFO: stdout: "I0408 00:50:22.058856 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/gdd 316\n" Apr 8 00:50:22.214: INFO: got output "I0408 00:50:22.058856 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/gdd 316\n" STEP: limiting log bytes Apr 8 00:50:22.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config 
logs logs-generator logs-generator --namespace=kubectl-1853 --limit-bytes=1' Apr 8 00:50:22.352: INFO: stderr: "" Apr 8 00:50:22.352: INFO: stdout: "I" Apr 8 00:50:22.352: INFO: got output "I" STEP: exposing timestamps Apr 8 00:50:22.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1853 --tail=1 --timestamps' Apr 8 00:50:22.451: INFO: stderr: "" Apr 8 00:50:22.451: INFO: stdout: "2020-04-08T00:50:22.259028812Z I0408 00:50:22.258868 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/dt2d 339\n" Apr 8 00:50:22.451: INFO: got output "2020-04-08T00:50:22.259028812Z I0408 00:50:22.258868 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/dt2d 339\n" STEP: restricting to a time range Apr 8 00:50:24.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1853 --since=1s' Apr 8 00:50:25.073: INFO: stderr: "" Apr 8 00:50:25.073: INFO: stdout: "I0408 00:50:24.258888 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/fzw 376\nI0408 00:50:24.459019 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/9wdk 259\nI0408 00:50:24.658887 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/r8g9 511\nI0408 00:50:24.858854 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/mjv 342\nI0408 00:50:25.058828 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/8m2 520\n" Apr 8 00:50:25.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1853 --since=24h' Apr 8 00:50:25.183: INFO: stderr: "" Apr 8 00:50:25.183: INFO: stdout: "I0408 00:50:20.058652 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/xv6x 269\nI0408 00:50:20.258788 1 logs_generator.go:76] 1 PUT 
/api/v1/namespaces/default/pods/rjx 446\nI0408 00:50:20.458769 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/bh7 583\nI0408 00:50:20.658872 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/l5v 367\nI0408 00:50:20.858857 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/g84h 413\nI0408 00:50:21.058878 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/7d8s 365\nI0408 00:50:21.258885 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/mxp 247\nI0408 00:50:21.458870 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/z5j8 321\nI0408 00:50:21.658814 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/jfh 495\nI0408 00:50:21.858868 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/n5d2 443\nI0408 00:50:22.058856 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/gdd 316\nI0408 00:50:22.258868 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/dt2d 339\nI0408 00:50:22.458855 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/4zv 333\nI0408 00:50:22.658862 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/4hr 421\nI0408 00:50:22.858822 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/tv7 554\nI0408 00:50:23.058818 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/d4l8 578\nI0408 00:50:23.258867 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/8pn 486\nI0408 00:50:23.458844 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/tt4k 580\nI0408 00:50:23.658838 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/5g92 372\nI0408 00:50:23.858846 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/v4kv 392\nI0408 00:50:24.058836 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/q68 511\nI0408 00:50:24.258888 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/fzw 376\nI0408 00:50:24.459019 1 logs_generator.go:76] 22 GET 
/api/v1/namespaces/default/pods/9wdk 259\nI0408 00:50:24.658887 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/r8g9 511\nI0408 00:50:24.858854 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/mjv 342\nI0408 00:50:25.058828 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/8m2 520\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 8 00:50:25.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1853' Apr 8 00:50:27.300: INFO: stderr: "" Apr 8 00:50:27.300: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:50:27.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1853" for this suite. • [SLOW TEST:11.871 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":264,"skipped":4493,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook 
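The log-filtering flags exercised by the passing "Kubectl logs" test above can be reproduced against any running pod. A sketch using the pod and namespace names from the log (the `--server`/`--kubeconfig` flags are omitted; they depend on your environment):

```shell
# Limit output to the last N bytes (the test expects the single byte "I")
kubectl logs logs-generator --namespace=kubectl-1853 --limit-bytes=1

# Show only the most recent line, with its RFC3339 timestamp prepended
kubectl logs logs-generator --namespace=kubectl-1853 --tail=1 --timestamps

# Restrict output to a relative time window
kubectl logs logs-generator --namespace=kubectl-1853 --since=1s
kubectl logs logs-generator --namespace=kubectl-1853 --since=24h
```

With `--since=1s` the test sees only the five most recent generator lines, while `--since=24h` returns the pod's full history, matching the two stdout captures above.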
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:50:27.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 8 00:50:35.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 8 00:50:35.496: INFO: Pod pod-with-prestop-http-hook still exists Apr 8 00:50:37.496: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 8 00:50:37.501: INFO: Pod pod-with-prestop-http-hook still exists Apr 8 00:50:39.496: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 8 00:50:39.501: INFO: Pod pod-with-prestop-http-hook still exists Apr 8 00:50:41.497: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 8 00:50:41.501: INFO: Pod pod-with-prestop-http-hook still exists Apr 8 00:50:43.496: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 8 00:50:43.501: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:50:43.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4041" for this suite. 
• [SLOW TEST:16.206 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:50:43.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-8297 STEP: Waiting for pods to come up. 
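The "prestop http hook" pod deleted above is created internally by the e2e framework. A minimal equivalent with a `preStop` HTTP hook might look like the following sketch; the image, port, and handler address are assumptions, not taken from the log (the framework points the hook at a separate handler pod it deploys first):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name matches the pod seen in the log
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2      # image is an assumption
    lifecycle:
      preStop:
        httpGet:                     # kubelet issues this GET before sending SIGTERM
          host: 10.244.1.10          # IP of the hook-handler pod (assumption)
          path: /echo?msg=prestop
          port: 8080
EOF
```

The test then deletes the pod and polls the handler (the "check prestop hook" step) to confirm the GET arrived before the container terminated.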
STEP: Creating tester pod tester in namespace prestop-8297 STEP: Deleting pre-stop pod Apr 8 00:50:56.606: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:50:56.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8297" for this suite. • [SLOW TEST:13.110 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":266,"skipped":4558,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:50:56.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:50:56.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8203" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4567,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:50:56.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-f4eff9c1-5803-4b68-8443-179850299fca STEP: Creating a pod to test consume configMaps Apr 8 00:50:57.278: INFO: Waiting up to 5m0s for pod "pod-configmaps-7b100bc7-80b8-4481-977d-6ed60fe32ff1" in namespace 
"configmap-7088" to be "Succeeded or Failed" Apr 8 00:50:57.282: INFO: Pod "pod-configmaps-7b100bc7-80b8-4481-977d-6ed60fe32ff1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.421555ms Apr 8 00:50:59.321: INFO: Pod "pod-configmaps-7b100bc7-80b8-4481-977d-6ed60fe32ff1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04278661s Apr 8 00:51:01.325: INFO: Pod "pod-configmaps-7b100bc7-80b8-4481-977d-6ed60fe32ff1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046439184s STEP: Saw pod success Apr 8 00:51:01.325: INFO: Pod "pod-configmaps-7b100bc7-80b8-4481-977d-6ed60fe32ff1" satisfied condition "Succeeded or Failed" Apr 8 00:51:01.328: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7b100bc7-80b8-4481-977d-6ed60fe32ff1 container configmap-volume-test: STEP: delete the pod Apr 8 00:51:01.354: INFO: Waiting for pod pod-configmaps-7b100bc7-80b8-4481-977d-6ed60fe32ff1 to disappear Apr 8 00:51:01.360: INFO: Pod pod-configmaps-7b100bc7-80b8-4481-977d-6ed60fe32ff1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:51:01.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7088" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4570,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:51:01.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 8 00:51:02.000: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 8 00:51:04.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903862, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903862, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
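The "consume configMaps" pod above mounts a ConfigMap as a volume, prints a key's value, and exits, which is why the test waits for phase "Succeeded or Failed". A hedged sketch of the same shape; the key, mount path, image, and command are assumptions (the e2e suite uses its own test image):

```shell
kubectl create configmap test-volume-cm --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never              # lets the pod reach a terminal phase
  containers:
  - name: configmap-volume-test     # container name matches the log
    image: busybox                  # assumption
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: test-volume-cm          # each key becomes a file in the mount
EOF
```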
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903862, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721903861, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 00:51:07.065: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 00:51:07.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:51:08.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4300" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.929 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":269,"skipped":4586,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:51:08.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-387888f8-eb3d-43fc-83a6-dfb506c3cf53 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:51:08.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3407" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":270,"skipped":4594,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:51:08.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 00:51:08.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2cd769f6-22c8-45b8-8168-485bbeb9f282" in namespace "projected-5382" to be "Succeeded or Failed" Apr 8 00:51:08.432: INFO: Pod "downwardapi-volume-2cd769f6-22c8-45b8-8168-485bbeb9f282": Phase="Pending", Reason="", readiness=false. Elapsed: 3.277017ms Apr 8 00:51:10.436: INFO: Pod "downwardapi-volume-2cd769f6-22c8-45b8-8168-485bbeb9f282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007136386s Apr 8 00:51:12.439: INFO: Pod "downwardapi-volume-2cd769f6-22c8-45b8-8168-485bbeb9f282": Phase="Succeeded", Reason="", readiness=false. 
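The "empty key" test above passes because the API server's validation rejects the object at admission time; no pod is ever involved. The rejection can be reproduced with a manifest that uses an empty-string key (a sketch; the exact error text varies by version, so it is not quoted here):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey
data:
  "": "value"    # empty key -- rejected by the API server's key validation
EOF
```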
Elapsed: 4.0103626s STEP: Saw pod success Apr 8 00:51:12.439: INFO: Pod "downwardapi-volume-2cd769f6-22c8-45b8-8168-485bbeb9f282" satisfied condition "Succeeded or Failed" Apr 8 00:51:12.442: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2cd769f6-22c8-45b8-8168-485bbeb9f282 container client-container: STEP: delete the pod Apr 8 00:51:12.458: INFO: Waiting for pod downwardapi-volume-2cd769f6-22c8-45b8-8168-485bbeb9f282 to disappear Apr 8 00:51:12.495: INFO: Pod downwardapi-volume-2cd769f6-22c8-45b8-8168-485bbeb9f282 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:51:12.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5382" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4609,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:51:12.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 8 00:51:12.548: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 8 00:51:12.559: INFO: Waiting for terminating namespaces to be deleted... 
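The "Projected downwardAPI" pod that just succeeded above exposes its own memory limit as a file via a downward-API volume item with a `resourceFieldRef`. A minimal sketch of that mechanism; image, paths, and the limit value are assumptions:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container          # container name matches the log
    image: busybox                  # assumption
    command: ["cat", "/etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"              # value surfaced in the mounted file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:           # projects a container resource, not a field path
          containerName: client-container
          resource: limits.memory
EOF
```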
Apr 8 00:51:12.562: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 8 00:51:12.566: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:51:12.566: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 00:51:12.566: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:51:12.566: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 00:51:12.566: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 8 00:51:12.570: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:51:12.570: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 00:51:12.570: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 00:51:12.570: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 00:51:12.570: INFO: tester from prestop-8297 started at 2020-04-08 00:50:47 +0000 UTC (1 container statuses recorded) Apr 8 00:51:12.570: INFO: Container tester ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 8 00:51:12.638: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 8 00:51:12.638: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 8 00:51:12.638: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 8 00:51:12.638: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker Apr 8 00:51:12.638: INFO: Pod tester requesting resource cpu=0m on Node latest-worker2 STEP: Starting 
Pods to consume most of the cluster CPU. Apr 8 00:51:12.639: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Apr 8 00:51:12.644: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-09a98d52-5896-4107-b32e-3fba8cc69227.1603b1e0c2346fa3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8810/filler-pod-09a98d52-5896-4107-b32e-3fba8cc69227 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-09a98d52-5896-4107-b32e-3fba8cc69227.1603b1e10fdff934], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-09a98d52-5896-4107-b32e-3fba8cc69227.1603b1e14f616e69], Reason = [Created], Message = [Created container filler-pod-09a98d52-5896-4107-b32e-3fba8cc69227] STEP: Considering event: Type = [Normal], Name = [filler-pod-09a98d52-5896-4107-b32e-3fba8cc69227.1603b1e166870243], Reason = [Started], Message = [Started container filler-pod-09a98d52-5896-4107-b32e-3fba8cc69227] STEP: Considering event: Type = [Normal], Name = [filler-pod-3dda4ad5-4534-423d-8350-e2b746f69212.1603b1e0c4123d06], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8810/filler-pod-3dda4ad5-4534-423d-8350-e2b746f69212 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-3dda4ad5-4534-423d-8350-e2b746f69212.1603b1e14bed336d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3dda4ad5-4534-423d-8350-e2b746f69212.1603b1e1749ae3a1], Reason = [Created], Message = [Created container filler-pod-3dda4ad5-4534-423d-8350-e2b746f69212] STEP: Considering event: Type = [Normal], Name = [filler-pod-3dda4ad5-4534-423d-8350-e2b746f69212.1603b1e18256be0a], Reason = 
[Started], Message = [Started container filler-pod-3dda4ad5-4534-423d-8350-e2b746f69212] STEP: Considering event: Type = [Warning], Name = [additional-pod.1603b1e1b3988a59], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:51:17.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8810" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:5.324 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":272,"skipped":4625,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:51:17.828: 
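The predicate test above first sums the CPU already requested on each node, then creates one "filler" pod per node sized to the remaining allocatable CPU (11130m here), so that a final pod requesting any CPU fails with `FailedScheduling`. The filler pods have roughly this shape; the node label and CPU figure are taken from the log, the rest is an assumption:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-example
spec:
  nodeSelector:
    node: latest-worker       # the test labels each node "node=<name>" first
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "11130m"         # allocatable CPU minus existing requests, per the log
      limits:
        cpu: "11130m"
EOF
```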
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1904 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1904 STEP: creating replication controller externalsvc in namespace services-1904 I0408 00:51:18.037936 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1904, replica count: 2 I0408 00:51:21.088394 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 00:51:24.089449 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 8 00:51:24.154: INFO: Creating new exec pod Apr 8 00:51:28.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1904 execpod5brcr -- /bin/sh -x -c nslookup clusterip-service' Apr 8 00:51:28.553: INFO: stderr: "I0408 00:51:28.449313 3377 log.go:172] (0xc00058a000) (0xc000643680) Create stream\nI0408 00:51:28.449386 3377 log.go:172] (0xc00058a000) (0xc000643680) Stream added, broadcasting: 1\nI0408 00:51:28.453291 3377 log.go:172] (0xc00058a000) Reply frame received for 1\nI0408 00:51:28.453340 3377 log.go:172] (0xc00058a000) (0xc000508aa0) 
Create stream\nI0408 00:51:28.453355 3377 log.go:172] (0xc00058a000) (0xc000508aa0) Stream added, broadcasting: 3\nI0408 00:51:28.454637 3377 log.go:172] (0xc00058a000) Reply frame received for 3\nI0408 00:51:28.454684 3377 log.go:172] (0xc00058a000) (0xc000934000) Create stream\nI0408 00:51:28.454694 3377 log.go:172] (0xc00058a000) (0xc000934000) Stream added, broadcasting: 5\nI0408 00:51:28.455583 3377 log.go:172] (0xc00058a000) Reply frame received for 5\nI0408 00:51:28.534226 3377 log.go:172] (0xc00058a000) Data frame received for 5\nI0408 00:51:28.534257 3377 log.go:172] (0xc000934000) (5) Data frame handling\nI0408 00:51:28.534277 3377 log.go:172] (0xc000934000) (5) Data frame sent\n+ nslookup clusterip-service\nI0408 00:51:28.544043 3377 log.go:172] (0xc00058a000) Data frame received for 3\nI0408 00:51:28.544078 3377 log.go:172] (0xc000508aa0) (3) Data frame handling\nI0408 00:51:28.544100 3377 log.go:172] (0xc000508aa0) (3) Data frame sent\nI0408 00:51:28.545888 3377 log.go:172] (0xc00058a000) Data frame received for 3\nI0408 00:51:28.545916 3377 log.go:172] (0xc000508aa0) (3) Data frame handling\nI0408 00:51:28.545941 3377 log.go:172] (0xc000508aa0) (3) Data frame sent\nI0408 00:51:28.546227 3377 log.go:172] (0xc00058a000) Data frame received for 3\nI0408 00:51:28.546250 3377 log.go:172] (0xc000508aa0) (3) Data frame handling\nI0408 00:51:28.546322 3377 log.go:172] (0xc00058a000) Data frame received for 5\nI0408 00:51:28.546364 3377 log.go:172] (0xc000934000) (5) Data frame handling\nI0408 00:51:28.548023 3377 log.go:172] (0xc00058a000) Data frame received for 1\nI0408 00:51:28.548057 3377 log.go:172] (0xc000643680) (1) Data frame handling\nI0408 00:51:28.548093 3377 log.go:172] (0xc000643680) (1) Data frame sent\nI0408 00:51:28.548126 3377 log.go:172] (0xc00058a000) (0xc000643680) Stream removed, broadcasting: 1\nI0408 00:51:28.548163 3377 log.go:172] (0xc00058a000) Go away received\nI0408 00:51:28.548501 3377 log.go:172] (0xc00058a000) (0xc000643680) 
Stream removed, broadcasting: 1\nI0408 00:51:28.548519 3377 log.go:172] (0xc00058a000) (0xc000508aa0) Stream removed, broadcasting: 3\nI0408 00:51:28.548528 3377 log.go:172] (0xc00058a000) (0xc000934000) Stream removed, broadcasting: 5\n" Apr 8 00:51:28.553: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1904.svc.cluster.local\tcanonical name = externalsvc.services-1904.svc.cluster.local.\nName:\texternalsvc.services-1904.svc.cluster.local\nAddress: 10.96.112.112\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1904, will wait for the garbage collector to delete the pods Apr 8 00:51:28.613: INFO: Deleting ReplicationController externalsvc took: 6.847458ms Apr 8 00:51:28.913: INFO: Terminating ReplicationController externalsvc pods took: 300.211339ms Apr 8 00:51:33.564: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:51:33.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1904" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:15.772 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":273,"skipped":4653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 00:51:33.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-7038 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7038 to expose endpoints map[] Apr 8 00:51:33.708: INFO: Get endpoints failed (21.297668ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 8 00:51:34.712: INFO: successfully validated that service endpoint-test2 in namespace services-7038 exposes endpoints map[] (1.025219886s elapsed) STEP: 
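The type change verified above can be done by hand with a merge patch; after the switch, in-cluster DNS serves a CNAME to the external name instead of an A record, which is exactly what the `nslookup` output in the log shows. A sketch using names from the log (depending on the cluster version you may also need to clear `spec.clusterIP`, as done here):

```shell
# Turn the ClusterIP service into an ExternalName alias
kubectl patch service clusterip-service -n services-1904 --type=merge -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-1904.svc.cluster.local","clusterIP":null}}'

# Verify from a pod inside the cluster: the lookup now follows a CNAME
kubectl exec -n services-1904 execpod -- nslookup clusterip-service
```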
Creating pod pod1 in namespace services-7038 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7038 to expose endpoints map[pod1:[80]] Apr 8 00:51:37.818: INFO: successfully validated that service endpoint-test2 in namespace services-7038 exposes endpoints map[pod1:[80]] (3.099121773s elapsed) STEP: Creating pod pod2 in namespace services-7038 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7038 to expose endpoints map[pod1:[80] pod2:[80]] Apr 8 00:51:41.984: INFO: successfully validated that service endpoint-test2 in namespace services-7038 exposes endpoints map[pod1:[80] pod2:[80]] (4.162351878s elapsed) STEP: Deleting pod pod1 in namespace services-7038 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7038 to expose endpoints map[pod2:[80]] Apr 8 00:51:43.010: INFO: successfully validated that service endpoint-test2 in namespace services-7038 exposes endpoints map[pod2:[80]] (1.021367839s elapsed) STEP: Deleting pod pod2 in namespace services-7038 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7038 to expose endpoints map[] Apr 8 00:51:44.024: INFO: successfully validated that service endpoint-test2 in namespace services-7038 exposes endpoints map[] (1.008750001s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 00:51:44.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7038" for this suite. 
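The "exposes endpoints map[pod1:[80] pod2:[80]]" checks above repeatedly compare the observed pod-to-ports mapping against an expected one until they match. A minimal sketch of that comparison, assuming the mapping is modeled as `map[string][]int` (the real framework walks `Endpoints` subsets from the API server; `endpointsEqual` is a hypothetical helper, not the framework's function):

```go
package main

import (
	"fmt"
	"sort"
)

// endpointsEqual reports whether the observed pod→ports map matches the
// expected one, ignoring port order. This mirrors the shape of the
// "expose endpoints map[pod1:[80] pod2:[80]]" validation in the log.
func endpointsEqual(want, got map[string][]int) bool {
	if len(want) != len(got) {
		return false
	}
	for pod, wp := range want {
		gp, ok := got[pod]
		if !ok || len(gp) != len(wp) {
			return false
		}
		// Compare port sets order-insensitively on copies.
		w := append([]int(nil), wp...)
		g := append([]int(nil), gp...)
		sort.Ints(w)
		sort.Ints(g)
		for i := range w {
			if w[i] != g[i] {
				return false
			}
		}
	}
	return true
}

func main() {
	want := map[string][]int{"pod1": {80}, "pod2": {80}}
	got := map[string][]int{"pod2": {80}, "pod1": {80}}
	fmt.Println(endpointsEqual(want, got)) // true: same pods, same ports
}
```

The framework polls this comparison inside a wait loop (up to the 3m0s visible in the log), so transient states such as the initial `endpoints "endpoint-test2" not found` are tolerated rather than failed immediately.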
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:10.809 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":274,"skipped":4683,"failed":0}
SS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 00:51:44.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-878313d7-ab77-469f-8e71-b2e46bd9860d
STEP: Creating secret with name s-test-opt-upd-3c996c21-30f1-4e1a-8d7e-8756bc3a9b90
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-878313d7-ab77-469f-8e71-b2e46bd9860d
STEP: Updating secret s-test-opt-upd-3c996c21-30f1-4e1a-8d7e-8756bc3a9b90
STEP: Creating secret with name s-test-opt-create-1513a091-e53c-488a-9f68-b029d3358c09
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 00:53:21.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4227" for this suite.
• [SLOW TEST:96.728 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4685,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Apr 8 00:53:21.139: INFO: Running AfterSuite actions on all nodes
Apr 8 00:53:21.139: INFO: Running AfterSuite actions on node 1
Apr 8 00:53:21.139: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}
Ran 275 of 4992 Specs in 4546.163 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS