I0401 23:36:54.211571       7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0401 23:36:54.211721       7 e2e.go:124] Starting e2e run "d5ae8c2d-0969-4d03-a25c-96aa92f0f517" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585784213 - Will randomize all specs
Will run 275 of 4992 specs

Apr  1 23:36:54.263: INFO: >>> kubeConfig: /root/.kube/config
Apr  1 23:36:54.269: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr  1 23:36:54.293: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr  1 23:36:54.329: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr  1 23:36:54.329: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr  1 23:36:54.329: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr  1 23:36:54.339: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr  1 23:36:54.339: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr  1 23:36:54.339: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr  1 23:36:54.341: INFO: kube-apiserver version: v1.17.0
Apr  1 23:36:54.341: INFO: >>> kubeConfig: /root/.kube/config
Apr  1 23:36:54.350: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  1 23:36:54.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
Apr  1 23:36:54.422: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr  1 23:36:54.430: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-adea4b44-41d8-4a59-a0cd-149ec176cecb" in namespace "security-context-test-1609" to be "Succeeded or Failed"
Apr  1 23:36:54.435: INFO: Pod "busybox-privileged-false-adea4b44-41d8-4a59-a0cd-149ec176cecb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.972464ms
Apr  1 23:36:56.440: INFO: Pod "busybox-privileged-false-adea4b44-41d8-4a59-a0cd-149ec176cecb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009654643s
Apr  1 23:36:58.444: INFO: Pod "busybox-privileged-false-adea4b44-41d8-4a59-a0cd-149ec176cecb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014063257s
Apr  1 23:36:58.444: INFO: Pod "busybox-privileged-false-adea4b44-41d8-4a59-a0cd-149ec176cecb" satisfied condition "Succeeded or Failed"
Apr  1 23:36:58.466: INFO: Got logs for pod "busybox-privileged-false-adea4b44-41d8-4a59-a0cd-149ec176cecb": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  1 23:36:58.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1609" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":6,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  1 23:36:58.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9919, will wait for the garbage collector to delete the pods
Apr  1 23:37:04.594: INFO: Deleting Job.batch foo took: 6.274594ms
Apr  1 23:37:04.895: INFO: Terminating Job.batch foo pods took: 300.251213ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  1 23:37:42.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9919" for this suite.
• [SLOW TEST:44.532 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":2,"skipped":24,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  1 23:37:43.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-1d451a1c-65a4-4bbb-91a4-29b9a2a1caa1
STEP: Creating a pod to test consume configMaps
Apr  1 23:37:43.090: INFO: Waiting up to 5m0s for pod "pod-configmaps-c0b40f9e-f480-4755-a4bb-93846ad88f0e" in namespace "configmap-7660" to be "Succeeded or Failed"
Apr  1 23:37:43.104: INFO: Pod "pod-configmaps-c0b40f9e-f480-4755-a4bb-93846ad88f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.80598ms
Apr  1 23:37:45.108: INFO: Pod "pod-configmaps-c0b40f9e-f480-4755-a4bb-93846ad88f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017997748s
Apr  1 23:37:47.113: INFO: Pod "pod-configmaps-c0b40f9e-f480-4755-a4bb-93846ad88f0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022108975s
STEP: Saw pod success
Apr  1 23:37:47.113: INFO: Pod "pod-configmaps-c0b40f9e-f480-4755-a4bb-93846ad88f0e" satisfied condition "Succeeded or Failed"
Apr  1 23:37:47.116: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c0b40f9e-f480-4755-a4bb-93846ad88f0e container configmap-volume-test: 
STEP: delete the pod
Apr  1 23:37:47.210: INFO: Waiting for pod pod-configmaps-c0b40f9e-f480-4755-a4bb-93846ad88f0e to disappear
Apr  1 23:37:47.219: INFO: Pod pod-configmaps-c0b40f9e-f480-4755-a4bb-93846ad88f0e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  1 23:37:47.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7660" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":38,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  1 23:37:47.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr  1 23:37:47.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Apr  1 23:37:50.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5187 create -f -'
Apr  1 23:37:52.816: INFO: stderr: ""
Apr  1 23:37:52.816: INFO: stdout: "e2e-test-crd-publish-openapi-7284-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr  1 23:37:52.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5187 delete e2e-test-crd-publish-openapi-7284-crds test-foo'
Apr  1 23:37:52.926: INFO: stderr: ""
Apr  1 23:37:52.926: INFO: stdout: "e2e-test-crd-publish-openapi-7284-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Apr  1 23:37:52.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5187 apply -f -'
Apr  1 23:37:53.169: INFO: stderr: ""
Apr  1 23:37:53.169: INFO: stdout: "e2e-test-crd-publish-openapi-7284-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr  1 23:37:53.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5187 delete e2e-test-crd-publish-openapi-7284-crds test-foo'
Apr  1 23:37:53.264: INFO: stderr: ""
Apr  1 23:37:53.264: INFO: stdout: "e2e-test-crd-publish-openapi-7284-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Apr  1 23:37:53.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5187 create -f -'
Apr  1 23:37:53.482: INFO: rc: 1
Apr  1 23:37:53.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5187 apply -f -'
Apr  1 23:37:53.723: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Apr  1 23:37:53.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5187 create -f -'
Apr  1 23:37:53.954: INFO: rc: 1
Apr  1 23:37:53.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5187 apply -f -'
Apr  1 23:37:54.176: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Apr  1 23:37:54.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7284-crds'
Apr  1 23:37:54.423: INFO: stderr: ""
Apr  1 23:37:54.423: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7284-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Apr  1 23:37:54.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7284-crds.metadata'
Apr  1 23:37:54.656: INFO: stderr: ""
Apr  1 23:37:54.656: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7284-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Apr  1 23:37:54.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7284-crds.spec'
Apr  1 23:37:54.923: INFO: stderr: ""
Apr  1 23:37:54.923: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7284-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Apr  1 23:37:54.923: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7284-crds.spec.bars'
Apr  1 23:37:55.155: INFO: stderr: ""
Apr  1 23:37:55.155: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7284-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<integer>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Apr  1 23:37:55.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7284-crds.spec.bars2'
Apr  1 23:37:55.390: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  1 23:37:57.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5187" for this suite.
• [SLOW TEST:10.093 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":4,"skipped":52,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  1 23:37:57.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr  1 23:38:00.421: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  1 23:38:00.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9872" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":150,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  1 23:38:00.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr  1 23:38:00.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version'
Apr  1 23:38:00.745: INFO: stderr: ""
Apr  1 23:38:00.745: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  1 23:38:00.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9294" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":6,"skipped":196,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  1 23:38:00.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-7dtn
STEP: Creating a pod to test atomic-volume-subpath
Apr  1 23:38:00.858: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7dtn" in namespace "subpath-4670" to be "Succeeded or Failed"
Apr  1 23:38:00.875: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Pending", Reason="", readiness=false. Elapsed: 17.688786ms
Apr  1 23:38:02.881: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022940858s
Apr  1 23:38:04.884: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Running", Reason="", readiness=true. Elapsed: 4.026791092s
Apr  1 23:38:06.888: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Running", Reason="", readiness=true. Elapsed: 6.030500389s
Apr  1 23:38:08.892: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Running", Reason="", readiness=true. Elapsed: 8.034201625s
Apr  1 23:38:10.896: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Running", Reason="", readiness=true. Elapsed: 10.038325275s
Apr  1 23:38:12.915: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Running", Reason="", readiness=true. Elapsed: 12.057231719s
Apr  1 23:38:14.933: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Running", Reason="", readiness=true. Elapsed: 14.075515678s
Apr  1 23:38:16.937: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Running", Reason="", readiness=true. Elapsed: 16.079407811s
Apr  1 23:38:18.943: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Running", Reason="", readiness=true. Elapsed: 18.085124483s
Apr  1 23:38:20.947: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Running", Reason="", readiness=true. Elapsed: 20.089249813s
Apr  1 23:38:22.951: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Running", Reason="", readiness=true. Elapsed: 22.093552056s
Apr  1 23:38:24.956: INFO: Pod "pod-subpath-test-downwardapi-7dtn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.098195642s
STEP: Saw pod success
Apr  1 23:38:24.956: INFO: Pod "pod-subpath-test-downwardapi-7dtn" satisfied condition "Succeeded or Failed"
Apr  1 23:38:24.959: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-7dtn container test-container-subpath-downwardapi-7dtn: 
STEP: delete the pod
Apr  1 23:38:25.025: INFO: Waiting for pod pod-subpath-test-downwardapi-7dtn to disappear
Apr  1 23:38:25.028: INFO: Pod pod-subpath-test-downwardapi-7dtn no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-7dtn
Apr  1 23:38:25.028: INFO: Deleting pod "pod-subpath-test-downwardapi-7dtn" in namespace "subpath-4670"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  1 23:38:25.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4670" for this suite.
• [SLOW TEST:24.259 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":7,"skipped":204,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr  1 23:38:25.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr  1 23:38:31.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7033" for this suite.
STEP: Destroying namespace "nsdeletetest-2525" for this suite.
Apr  1 23:38:31.329: INFO: Namespace nsdeletetest-2525 was already deleted
STEP: Destroying namespace "nsdeletetest-3809" for this suite.
• [SLOW TEST:6.295 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":8,"skipped":210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:38:31.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 1 23:38:32.018: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 1 23:38:34.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381112, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381112, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381112, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381112, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 1 23:38:37.056: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:38:37.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4423" for this suite. STEP: Destroying namespace "webhook-4423-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.087 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":9,"skipped":239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:38:37.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-75243006-9828-42d4-a26f-936f3f238d13 in namespace container-probe-2029 Apr 1 23:38:41.493: INFO: Started pod liveness-75243006-9828-42d4-a26f-936f3f238d13 in namespace container-probe-2029 
STEP: checking the pod's current state and verifying that restartCount is present Apr 1 23:38:41.495: INFO: Initial restart count of pod liveness-75243006-9828-42d4-a26f-936f3f238d13 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:42:42.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2029" for this suite. • [SLOW TEST:244.676 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":309,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:42:42.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-a9cfafef-d4a5-4282-900c-6d7c03ff6e71 STEP: Creating a pod to test consume secrets Apr 1 23:42:42.426: INFO: Waiting up to 
5m0s for pod "pod-secrets-9da151f0-231b-4932-9a44-684970017df1" in namespace "secrets-9691" to be "Succeeded or Failed" Apr 1 23:42:42.447: INFO: Pod "pod-secrets-9da151f0-231b-4932-9a44-684970017df1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.307287ms Apr 1 23:42:44.451: INFO: Pod "pod-secrets-9da151f0-231b-4932-9a44-684970017df1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024908326s Apr 1 23:42:46.455: INFO: Pod "pod-secrets-9da151f0-231b-4932-9a44-684970017df1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029332133s STEP: Saw pod success Apr 1 23:42:46.455: INFO: Pod "pod-secrets-9da151f0-231b-4932-9a44-684970017df1" satisfied condition "Succeeded or Failed" Apr 1 23:42:46.458: INFO: Trying to get logs from node latest-worker pod pod-secrets-9da151f0-231b-4932-9a44-684970017df1 container secret-volume-test: STEP: delete the pod Apr 1 23:42:46.491: INFO: Waiting for pod pod-secrets-9da151f0-231b-4932-9a44-684970017df1 to disappear Apr 1 23:42:46.494: INFO: Pod pod-secrets-9da151f0-231b-4932-9a44-684970017df1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:42:46.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9691" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":312,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:42:46.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Apr 1 23:42:46.619: INFO: Waiting up to 5m0s for pod "pod-d9d56020-8e90-46b2-977d-08352674eb61" in namespace "emptydir-9065" to be "Succeeded or Failed" Apr 1 23:42:46.639: INFO: Pod "pod-d9d56020-8e90-46b2-977d-08352674eb61": Phase="Pending", Reason="", readiness=false. Elapsed: 19.451241ms Apr 1 23:42:48.643: INFO: Pod "pod-d9d56020-8e90-46b2-977d-08352674eb61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023527291s Apr 1 23:42:50.647: INFO: Pod "pod-d9d56020-8e90-46b2-977d-08352674eb61": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027411038s STEP: Saw pod success Apr 1 23:42:50.647: INFO: Pod "pod-d9d56020-8e90-46b2-977d-08352674eb61" satisfied condition "Succeeded or Failed" Apr 1 23:42:50.650: INFO: Trying to get logs from node latest-worker2 pod pod-d9d56020-8e90-46b2-977d-08352674eb61 container test-container: STEP: delete the pod Apr 1 23:42:50.677: INFO: Waiting for pod pod-d9d56020-8e90-46b2-977d-08352674eb61 to disappear Apr 1 23:42:50.680: INFO: Pod pod-d9d56020-8e90-46b2-977d-08352674eb61 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:42:50.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9065" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":329,"failed":0} ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:42:50.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 1 23:42:50.831: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/:
containers/ pods/ (200; 5.888798ms)
Apr 1 23:42:50.854: INFO: (1) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 21.991006ms)
Apr 1 23:42:50.858: INFO: (2) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 4.27903ms)
Apr 1 23:42:50.861: INFO: (3) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.347727ms)
Apr 1 23:42:50.865: INFO: (4) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 4.056869ms)
Apr 1 23:42:50.869: INFO: (5) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.82024ms)
Apr 1 23:42:50.873: INFO: (6) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.367861ms)
Apr 1 23:42:50.876: INFO: (7) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.204223ms)
Apr 1 23:42:50.880: INFO: (8) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.565976ms)
Apr 1 23:42:50.883: INFO: (9) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.396958ms)
Apr 1 23:42:50.887: INFO: (10) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.538077ms)
Apr 1 23:42:50.890: INFO: (11) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.595066ms)
Apr 1 23:42:50.894: INFO: (12) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 4.024266ms)
Apr 1 23:42:50.898: INFO: (13) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.291021ms)
Apr 1 23:42:50.901: INFO: (14) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.206751ms)
Apr 1 23:42:50.905: INFO: (15) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.653441ms)
Apr 1 23:42:50.908: INFO: (16) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.315373ms)
Apr 1 23:42:50.911: INFO: (17) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.348406ms)
Apr 1 23:42:50.915: INFO: (18) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.460696ms)
Apr 1 23:42:50.918: INFO: (19) /api/v1/nodes/latest-worker/proxy/logs/: containers/ pods/ (200; 3.429565ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:42:50.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6637" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":13,"skipped":329,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:42:50.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Apr 1 23:42:51.006: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:42:51.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6243" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":14,"skipped":337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:42:51.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-24d6a537-7bc5-4fcb-bd18-34071b36b03f in namespace container-probe-9384 Apr 1 23:42:55.187: INFO: Started pod busybox-24d6a537-7bc5-4fcb-bd18-34071b36b03f in namespace container-probe-9384 STEP: checking the pod's current state and verifying that restartCount is present Apr 1 23:42:55.189: INFO: Initial restart count of pod busybox-24d6a537-7bc5-4fcb-bd18-34071b36b03f is 0 Apr 1 23:43:49.393: INFO: Restart count of pod container-probe-9384/busybox-24d6a537-7bc5-4fcb-bd18-34071b36b03f is now 1 (54.203399474s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:43:49.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9384" for this 
suite. • [SLOW TEST:58.325 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":363,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:43:49.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-6prk STEP: Creating a pod to test atomic-volume-subpath Apr 1 23:43:49.510: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6prk" in namespace "subpath-6665" to be "Succeeded or Failed" Apr 1 23:43:49.514: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.603394ms Apr 1 23:43:51.517: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006907059s Apr 1 23:43:53.520: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Running", Reason="", readiness=true. Elapsed: 4.010087729s Apr 1 23:43:55.524: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Running", Reason="", readiness=true. Elapsed: 6.013956312s Apr 1 23:43:57.529: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Running", Reason="", readiness=true. Elapsed: 8.018315254s Apr 1 23:43:59.533: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Running", Reason="", readiness=true. Elapsed: 10.022746247s Apr 1 23:44:01.537: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Running", Reason="", readiness=true. Elapsed: 12.027063797s Apr 1 23:44:03.542: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Running", Reason="", readiness=true. Elapsed: 14.031346563s Apr 1 23:44:05.546: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Running", Reason="", readiness=true. Elapsed: 16.035394026s Apr 1 23:44:07.550: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Running", Reason="", readiness=true. Elapsed: 18.039874668s Apr 1 23:44:09.555: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Running", Reason="", readiness=true. Elapsed: 20.044335116s Apr 1 23:44:11.559: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Running", Reason="", readiness=true. Elapsed: 22.048334868s Apr 1 23:44:13.563: INFO: Pod "pod-subpath-test-configmap-6prk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.052272039s STEP: Saw pod success Apr 1 23:44:13.563: INFO: Pod "pod-subpath-test-configmap-6prk" satisfied condition "Succeeded or Failed" Apr 1 23:44:13.566: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-6prk container test-container-subpath-configmap-6prk: STEP: delete the pod Apr 1 23:44:13.620: INFO: Waiting for pod pod-subpath-test-configmap-6prk to disappear Apr 1 23:44:13.627: INFO: Pod pod-subpath-test-configmap-6prk no longer exists STEP: Deleting pod pod-subpath-test-configmap-6prk Apr 1 23:44:13.627: INFO: Deleting pod "pod-subpath-test-configmap-6prk" in namespace "subpath-6665" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:44:13.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6665" for this suite. • [SLOW TEST:24.198 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":16,"skipped":366,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:44:13.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 1 23:44:14.201: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 1 23:44:16.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381454, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381454, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381454, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381454, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 1 23:44:19.251: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:44:29.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1513" for this suite. STEP: Destroying namespace "webhook-1513-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.886 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":17,"skipped":371,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:44:29.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-8307/configmap-test-5fcbcee0-e36b-4d56-8662-2ea50bb68ef0 STEP: Creating a pod to test consume configMaps Apr 1 23:44:29.584: INFO: Waiting up to 5m0s for pod "pod-configmaps-33865b05-7053-4a8d-bd5e-6e44f75c8d66" in namespace "configmap-8307" to be "Succeeded or Failed" Apr 1 23:44:29.586: INFO: Pod "pod-configmaps-33865b05-7053-4a8d-bd5e-6e44f75c8d66": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.538508ms Apr 1 23:44:31.592: INFO: Pod "pod-configmaps-33865b05-7053-4a8d-bd5e-6e44f75c8d66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008084607s Apr 1 23:44:33.597: INFO: Pod "pod-configmaps-33865b05-7053-4a8d-bd5e-6e44f75c8d66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012693347s STEP: Saw pod success Apr 1 23:44:33.597: INFO: Pod "pod-configmaps-33865b05-7053-4a8d-bd5e-6e44f75c8d66" satisfied condition "Succeeded or Failed" Apr 1 23:44:33.600: INFO: Trying to get logs from node latest-worker pod pod-configmaps-33865b05-7053-4a8d-bd5e-6e44f75c8d66 container env-test: STEP: delete the pod Apr 1 23:44:33.617: INFO: Waiting for pod pod-configmaps-33865b05-7053-4a8d-bd5e-6e44f75c8d66 to disappear Apr 1 23:44:33.622: INFO: Pod pod-configmaps-33865b05-7053-4a8d-bd5e-6e44f75c8d66 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:44:33.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8307" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":387,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:44:33.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 1 23:44:34.086: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 1 23:44:36.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381474, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381474, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381474, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381474, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 1 23:44:39.142: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 1 23:44:39.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5645-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:44:40.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4398" for this suite. STEP: Destroying namespace "webhook-4398-markers" for this suite. 
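The "Wait for the deployment to be ready" step above keeps dumping `DeploymentStatus` until the webhook pod is available. The readiness decision it is waiting on can be sketched against plain dicts shaped like those dumps (this is a simplified stand-in, not client-go's actual deployment-complete logic):

```python
def deployment_ready(status, spec_replicas=1):
    """Rough readiness check modeled on the status dumps in the log:
    ready once all replicas are updated and available and the
    Available condition reports True."""
    if status.get("observedGeneration", 0) < 1:
        return False
    if status.get("updatedReplicas", 0) < spec_replicas:
        return False
    if status.get("availableReplicas", 0) < spec_replicas:
        return False
    conds = {c["type"]: c["status"] for c in status.get("conditions", [])}
    return conds.get("Available") == "True"

# Status as logged mid-rollout (MinimumReplicasUnavailable) ...
rolling = {"observedGeneration": 1, "replicas": 1, "updatedReplicas": 1,
           "availableReplicas": 0,
           "conditions": [{"type": "Available", "status": "False"},
                          {"type": "Progressing", "status": "True"}]}
# ... and after the webhook pod passes its readiness probe.
ready = {"observedGeneration": 1, "replicas": 1, "updatedReplicas": 1,
         "availableReplicas": 1,
         "conditions": [{"type": "Available", "status": "True"},
                        {"type": "Progressing", "status": "True"}]}

print(deployment_ready(rolling), deployment_ready(ready))  # False True
```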
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.688 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":19,"skipped":390,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:44:40.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 1 23:44:40.546: INFO: Waiting up to 5m0s for pod "pod-e2e2f1a9-65f0-4666-bfe7-62dcb4e54a37" in namespace "emptydir-7910" to be "Succeeded or Failed" Apr 1 23:44:40.616: INFO: Pod "pod-e2e2f1a9-65f0-4666-bfe7-62dcb4e54a37": Phase="Pending", Reason="", readiness=false. 
Elapsed: 70.042001ms Apr 1 23:44:42.620: INFO: Pod "pod-e2e2f1a9-65f0-4666-bfe7-62dcb4e54a37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073335963s Apr 1 23:44:44.624: INFO: Pod "pod-e2e2f1a9-65f0-4666-bfe7-62dcb4e54a37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077767065s STEP: Saw pod success Apr 1 23:44:44.624: INFO: Pod "pod-e2e2f1a9-65f0-4666-bfe7-62dcb4e54a37" satisfied condition "Succeeded or Failed" Apr 1 23:44:44.627: INFO: Trying to get logs from node latest-worker2 pod pod-e2e2f1a9-65f0-4666-bfe7-62dcb4e54a37 container test-container: STEP: delete the pod Apr 1 23:44:44.669: INFO: Waiting for pod pod-e2e2f1a9-65f0-4666-bfe7-62dcb4e54a37 to disappear Apr 1 23:44:44.682: INFO: Pod pod-e2e2f1a9-65f0-4666-bfe7-62dcb4e54a37 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:44:44.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7910" for this suite. 
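The emptydir test above creates a volume with mode 0777 on the node's default medium and has the container report the permissions it observes. The equivalent local assertion looks like this (an explicit `chmod` is used because `mkdir` mode is subject to the process umask):

```python
import os
import stat
import tempfile

# Stand-in for the emptyDir mount point; the directory name is illustrative.
with tempfile.TemporaryDirectory() as mount:
    vol = os.path.join(mount, "volume")
    os.mkdir(vol)
    os.chmod(vol, 0o777)  # chmod is not masked by umask, unlike mkdir's mode
    mode = stat.S_IMODE(os.stat(vol).st_mode)
    print(oct(mode))  # 0o777
```

The `[LinuxOnly]` tag on the test exists because these POSIX mode bits have no direct Windows equivalent.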
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":400,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:44:44.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-3f639feb-2431-4597-b628-1f44f0501680 STEP: Creating a pod to test consume secrets Apr 1 23:44:44.880: INFO: Waiting up to 5m0s for pod "pod-secrets-8c0e6270-4436-45e0-bfef-a242870e9b5a" in namespace "secrets-8374" to be "Succeeded or Failed" Apr 1 23:44:44.902: INFO: Pod "pod-secrets-8c0e6270-4436-45e0-bfef-a242870e9b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.469725ms Apr 1 23:44:46.905: INFO: Pod "pod-secrets-8c0e6270-4436-45e0-bfef-a242870e9b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024712132s Apr 1 23:44:48.909: INFO: Pod "pod-secrets-8c0e6270-4436-45e0-bfef-a242870e9b5a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029188977s STEP: Saw pod success Apr 1 23:44:48.909: INFO: Pod "pod-secrets-8c0e6270-4436-45e0-bfef-a242870e9b5a" satisfied condition "Succeeded or Failed" Apr 1 23:44:48.915: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-8c0e6270-4436-45e0-bfef-a242870e9b5a container secret-volume-test: STEP: delete the pod Apr 1 23:44:48.998: INFO: Waiting for pod pod-secrets-8c0e6270-4436-45e0-bfef-a242870e9b5a to disappear Apr 1 23:44:49.002: INFO: Pod pod-secrets-8c0e6270-4436-45e0-bfef-a242870e9b5a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:44:49.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8374" for this suite. STEP: Destroying namespace "secret-namespace-8827" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:44:49.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: 
Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 1 23:44:49.794: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 1 23:44:51.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381489, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381489, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381489, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381489, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 1 23:44:54.834: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 1 23:44:58.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-3625 to-be-attached-pod -i -c=container1' Apr 1 23:44:59.045: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:44:59.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3625" for this suite. STEP: Destroying namespace "webhook-3625-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.123 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":22,"skipped":430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:44:59.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 1 23:45:00.188: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 1 23:45:02.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381500, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381500, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381500, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381500, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 1 23:45:05.247: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 1 23:45:05.268: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:45:05.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-8435" for this suite. STEP: Destroying namespace "webhook-8435-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.244 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":23,"skipped":466,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:45:05.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 1 23:45:05.459: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 
Apr 1 23:45:06.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9860" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":24,"skipped":481,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:45:06.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 1 23:45:06.606: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 1 23:45:06.624: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 1 23:45:11.629: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 1 23:45:11.629: INFO: Creating deployment "test-rolling-update-deployment" Apr 1 23:45:11.633: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 1 23:45:11.637: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 1 23:45:13.645: INFO: 
Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 1 23:45:13.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381511, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381511, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381511, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381511, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 23:45:15.652: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 1 23:45:15.663: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4908 /apis/apps/v1/namespaces/deployment-4908/deployments/test-rolling-update-deployment 73293bee-4dd2-4931-bd1d-0d2ea438c094 4658415 1 2020-04-01 23:45:11 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002fef118 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-01 23:45:11 +0000 UTC,LastTransitionTime:2020-04-01 23:45:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-01 23:45:14 +0000 UTC,LastTransitionTime:2020-04-01 23:45:11 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 1 23:45:15.666: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-4908 /apis/apps/v1/namespaces/deployment-4908/replicasets/test-rolling-update-deployment-664dd8fc7f a91508e6-4493-4361-afe0-fa95f97c590c 4658404 1 2020-04-01 23:45:11 +0000 UTC map[name:sample-pod
pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 73293bee-4dd2-4931-bd1d-0d2ea438c094 0xc002fef637 0xc002fef638}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002fef6b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 1 23:45:15.666: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 1 23:45:15.666: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4908 /apis/apps/v1/namespaces/deployment-4908/replicasets/test-rolling-update-controller ed051859-7b0c-4f88-8ade-690525b50d59 4658413 2 2020-04-01 23:45:06 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment 
test-rolling-update-deployment 73293bee-4dd2-4931-bd1d-0d2ea438c094 0xc002fef567 0xc002fef568}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002fef5c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 1 23:45:15.670: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-l5gs9" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-l5gs9 test-rolling-update-deployment-664dd8fc7f- deployment-4908 /api/v1/namespaces/deployment-4908/pods/test-rolling-update-deployment-664dd8fc7f-l5gs9 a2de7020-c933-4414-a793-254df7a1ccab 4658403 0 2020-04-01 23:45:11 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f a91508e6-4493-4361-afe0-fa95f97c590c 0xc002fefb87 0xc002fefb88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6pztc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6pztc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6pztc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePull
Secrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:45:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:45:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:45:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:45:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.18,StartTime:2020-04-01 23:45:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-01 23:45:13 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://d1bf6ed3b06c431374193371aecfc06c69fc4111052d4de568ab8da6f197889b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:45:15.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4908" for this suite. • [SLOW TEST:9.139 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":25,"skipped":491,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:45:15.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Apr 1 23:45:15.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Apr 1 23:45:15.824: INFO: stderr: "" Apr 1 23:45:15.824: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:45:15.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6728" for this suite. 
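The `kubectl cluster-info` stdout captured above is full of SGR color escapes (`\x1b[0;32m` and friends), so validating it as plain text means stripping those first. A small sketch of that cleanup, using the escape sequences exactly as they appear in the log:

```python
import re

ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s):
    """Remove SGR color codes such as \\x1b[0;32m from CLI output so the
    cluster-info text can be matched as plain strings."""
    return ANSI_SGR.sub("", s)

stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n")
plain = strip_ansi(stdout)
print(plain.strip())  # Kubernetes master is running at https://172.30.12.66:32771
```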
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":26,"skipped":492,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:45:15.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 1 23:45:16.548: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 1 23:45:18.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381516, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381516, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381516, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381516, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 1 23:45:21.587: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 1 23:45:21.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8127-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:45:22.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2268" for this suite. STEP: Destroying namespace "webhook-2268-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.159 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":27,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:45:22.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:45:27.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "watch-9848" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":28,"skipped":528,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:45:27.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:45:28.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2136" for this suite. 
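Each finished spec above emits an inline JSON progress record such as `{"msg":"PASSED ...","total":275,"completed":28,"skipped":528,"failed":0}`, sometimes prefixed with `•` and followed by more log text on the same line. A sketch of pulling those records out of a run — my own helper, not part of the e2e framework; it uses `raw_decode` so trailing log text after the JSON object is ignored:

```python
import json

DEC = json.JSONDecoder()

def progress_records(log_text):
    # Decode each embedded `{"msg": ...}` progress object, tolerating the
    # "•" prefix and any log text that trails the JSON on the same line.
    for line in log_text.splitlines():
        i = line.find('{"msg"')
        if i != -1:
            obj, _ = DEC.raw_decode(line[i:])
            yield obj

sample = ('•{"msg":"PASSED [k8s.io] Lease lease API should be available '
          '[Conformance]","total":275,"completed":29,"skipped":543,"failed":0} S')
rec = next(progress_records(sample))
print(rec["completed"], rec["failed"])   # → 29 0
```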
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":29,"skipped":543,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:45:28.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating a pod Apr 1 23:45:28.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-9072 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 1 23:45:28.375: INFO: stderr: "" Apr 1 23:45:28.375: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Apr 1 23:45:28.375: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 1 23:45:28.375: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9072" to be "running and ready, or succeeded" Apr 1 23:45:28.472: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 96.63815ms Apr 1 23:45:30.477: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101741155s Apr 1 23:45:32.481: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.106006273s Apr 1 23:45:32.481: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 1 23:45:32.481: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Apr 1 23:45:32.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9072' Apr 1 23:45:32.608: INFO: stderr: "" Apr 1 23:45:32.608: INFO: stdout: "I0401 23:45:30.681913 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/xxmt 291\nI0401 23:45:30.882021 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/fxw 542\nI0401 23:45:31.082104 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/d9r 317\nI0401 23:45:31.282204 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/kj4 395\nI0401 23:45:31.482113 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/b2gf 305\nI0401 23:45:31.682146 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/5wd 261\nI0401 23:45:31.882122 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/l7wh 272\nI0401 23:45:32.082063 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/ksn 291\nI0401 23:45:32.282083 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/dkxt 244\nI0401 23:45:32.482113 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/q56s 468\n" STEP: limiting log lines Apr 1 23:45:32.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9072 --tail=1' Apr 1 23:45:32.723: INFO: stderr: "" Apr 1 23:45:32.723: INFO: stdout: "I0401 
23:45:32.682148 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/wgrs 578\n" Apr 1 23:45:32.723: INFO: got output "I0401 23:45:32.682148 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/wgrs 578\n" STEP: limiting log bytes Apr 1 23:45:32.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9072 --limit-bytes=1' Apr 1 23:45:32.836: INFO: stderr: "" Apr 1 23:45:32.836: INFO: stdout: "I" Apr 1 23:45:32.836: INFO: got output "I" STEP: exposing timestamps Apr 1 23:45:32.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9072 --tail=1 --timestamps' Apr 1 23:45:32.948: INFO: stderr: "" Apr 1 23:45:32.948: INFO: stdout: "2020-04-01T23:45:32.882224786Z I0401 23:45:32.882051 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/bd4l 356\n" Apr 1 23:45:32.948: INFO: got output "2020-04-01T23:45:32.882224786Z I0401 23:45:32.882051 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/bd4l 356\n" STEP: restricting to a time range Apr 1 23:45:35.448: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9072 --since=1s' Apr 1 23:45:35.562: INFO: stderr: "" Apr 1 23:45:35.562: INFO: stdout: "I0401 23:45:34.682123 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/mqzv 263\nI0401 23:45:34.882087 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/8lfg 550\nI0401 23:45:35.082060 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/k7r 336\nI0401 23:45:35.282118 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/rnk 537\nI0401 23:45:35.482063 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/9lq 369\n" Apr 1 23:45:35.562: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9072 --since=24h' Apr 1 23:45:35.677: INFO: stderr: "" Apr 1 23:45:35.677: INFO: stdout: "I0401 23:45:30.681913 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/xxmt 291\nI0401 23:45:30.882021 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/fxw 542\nI0401 23:45:31.082104 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/d9r 317\nI0401 23:45:31.282204 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/kj4 395\nI0401 23:45:31.482113 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/b2gf 305\nI0401 23:45:31.682146 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/5wd 261\nI0401 23:45:31.882122 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/l7wh 272\nI0401 23:45:32.082063 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/ksn 291\nI0401 23:45:32.282083 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/dkxt 244\nI0401 23:45:32.482113 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/q56s 468\nI0401 23:45:32.682148 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/wgrs 578\nI0401 23:45:32.882051 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/bd4l 356\nI0401 23:45:33.082051 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/c6p 574\nI0401 23:45:33.282095 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/ljc 235\nI0401 23:45:33.482074 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/zxpg 259\nI0401 23:45:33.682079 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/jpbn 263\nI0401 23:45:33.882045 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/gsc 267\nI0401 23:45:34.082057 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/q5mb 418\nI0401 23:45:34.282072 1 logs_generator.go:76] 18 POST 
/api/v1/namespaces/kube-system/pods/pdn 432\nI0401 23:45:34.482056 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/rxk 517\nI0401 23:45:34.682123 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/mqzv 263\nI0401 23:45:34.882087 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/8lfg 550\nI0401 23:45:35.082060 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/k7r 336\nI0401 23:45:35.282118 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/rnk 537\nI0401 23:45:35.482063 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/9lq 369\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 1 23:45:35.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9072' Apr 1 23:45:42.761: INFO: stderr: "" Apr 1 23:45:42.761: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:45:42.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9072" for this suite. 
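The `logs-generator` lines filtered above all share one shape: a klog header, then `<seq> <verb> <url> <number>` (e.g. `I0401 23:45:30.681913 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/xxmt 291`). A sketch of parsing them — the field names (`seq`, `verb`, `url`, `num`) are my own labels, not names used by the generator:

```python
import re

# Matches the generator lines seen in this log: klog header, then
# "<seq> <verb> <url> <num>". Field names here are illustrative labels.
LINE = re.compile(
    r"^I\d{4} [\d:.]+\s+\d+ logs_generator\.go:\d+\] "
    r"(?P<seq>\d+) (?P<verb>[A-Z]+) (?P<url>\S+) (?P<num>\d+)$"
)

def parse(line):
    m = LINE.match(line)
    return m.groupdict() if m else None

rec = parse("I0401 23:45:30.681913 1 logs_generator.go:76] "
            "0 PUT /api/v1/namespaces/ns/pods/xxmt 291")
print(rec["verb"], rec["url"])   # → PUT /api/v1/namespaces/ns/pods/xxmt
```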
• [SLOW TEST:14.600 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":30,"skipped":544,"failed":0} [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:45:42.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:45:46.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4892" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":544,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:45:46.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-0fa3be44-a8df-4a13-b566-fe0857212e73 STEP: Creating a pod to test consume secrets Apr 1 23:45:46.956: INFO: Waiting up to 5m0s for pod "pod-secrets-82c73a72-c192-45ef-9991-2aa6e29f6cbe" in namespace "secrets-5565" to be "Succeeded or Failed" Apr 1 23:45:46.959: INFO: Pod "pod-secrets-82c73a72-c192-45ef-9991-2aa6e29f6cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.618394ms Apr 1 23:45:48.963: INFO: Pod "pod-secrets-82c73a72-c192-45ef-9991-2aa6e29f6cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007436703s Apr 1 23:45:50.967: INFO: Pod "pod-secrets-82c73a72-c192-45ef-9991-2aa6e29f6cbe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011516505s STEP: Saw pod success Apr 1 23:45:50.967: INFO: Pod "pod-secrets-82c73a72-c192-45ef-9991-2aa6e29f6cbe" satisfied condition "Succeeded or Failed" Apr 1 23:45:50.970: INFO: Trying to get logs from node latest-worker pod pod-secrets-82c73a72-c192-45ef-9991-2aa6e29f6cbe container secret-env-test: STEP: delete the pod Apr 1 23:45:50.984: INFO: Waiting for pod pod-secrets-82c73a72-c192-45ef-9991-2aa6e29f6cbe to disappear Apr 1 23:45:50.989: INFO: Pod pod-secrets-82c73a72-c192-45ef-9991-2aa6e29f6cbe no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:45:50.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5565" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":554,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:45:51.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 1 23:45:51.316: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 1 23:45:56.322: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring 
each pod is running Apr 1 23:45:56.322: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 1 23:45:58.327: INFO: Creating deployment "test-rollover-deployment" Apr 1 23:45:58.339: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 1 23:46:00.345: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 1 23:46:00.350: INFO: Ensure that both replica sets have 1 created replica Apr 1 23:46:00.356: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 1 23:46:00.362: INFO: Updating deployment test-rollover-deployment Apr 1 23:46:00.362: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 1 23:46:02.379: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 1 23:46:02.384: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 1 23:46:02.389: INFO: all replica sets need to contain the pod-template-hash label Apr 1 23:46:02.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381560, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Apr 1 23:46:04.397: INFO: all replica sets need to contain the pod-template-hash label Apr 1 23:46:04.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381563, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 23:46:06.397: INFO: all replica sets need to contain the pod-template-hash label Apr 1 23:46:06.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381563, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 23:46:08.397: INFO: all replica sets need to contain the pod-template-hash label Apr 1 23:46:08.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381563, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 23:46:10.408: INFO: all replica sets need to contain the pod-template-hash label Apr 1 23:46:10.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381563, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 23:46:12.397: INFO: all replica sets need to contain the pod-template-hash label Apr 1 23:46:12.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381563, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381558, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 23:46:14.465: INFO: Apr 1 23:46:14.465: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 1 23:46:14.473: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2047 /apis/apps/v1/namespaces/deployment-2047/deployments/test-rollover-deployment 55361d7d-2da2-46ed-bd26-d7ecacb549d2 4658976 2 2020-04-01 23:45:58 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00232f1c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-01 23:45:58 +0000 UTC,LastTransitionTime:2020-04-01 23:45:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-01 23:46:13 +0000 UTC,LastTransitionTime:2020-04-01 23:45:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 1 23:46:14.476: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-2047 /apis/apps/v1/namespaces/deployment-2047/replicasets/test-rollover-deployment-78df7bc796 
2fb1aaef-0339-4584-9e92-6608774704d7 4658965 2 2020-04-01 23:46:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 55361d7d-2da2-46ed-bd26-d7ecacb549d2 0xc0022abe17 0xc0022abe18}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022abe88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 1 23:46:14.476: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 1 23:46:14.476: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2047 /apis/apps/v1/namespaces/deployment-2047/replicasets/test-rollover-controller 66d50401-675d-4ae8-b71d-71894e3d2d6c 4658974 2 2020-04-01 23:45:51 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 
Deployment test-rollover-deployment 55361d7d-2da2-46ed-bd26-d7ecacb549d2 0xc0022abd47 0xc0022abd48}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0022abda8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 1 23:46:14.476: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-2047 /apis/apps/v1/namespaces/deployment-2047/replicasets/test-rollover-deployment-f6c94f66c 92a2a683-9f7a-45ef-9914-d37d0cd30a58 4658919 2 2020-04-01 23:45:58 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 55361d7d-2da2-46ed-bd26-d7ecacb549d2 0xc0022abef0 0xc0022abef1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022abf68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 1 23:46:14.479: INFO: Pod "test-rollover-deployment-78df7bc796-wzndb" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-wzndb test-rollover-deployment-78df7bc796- deployment-2047 /api/v1/namespaces/deployment-2047/pods/test-rollover-deployment-78df7bc796-wzndb e5b93e59-b45c-4c15-b684-9457ba9f0006 4658933 0 2020-04-01 23:46:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 2fb1aaef-0339-4584-9e92-6608774704d7 0xc002376c37 0xc002376c38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p9px8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p9px8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p9px8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePull
Secrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:46:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:46:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:46:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:46:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.20,StartTime:2020-04-01 23:46:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-01 23:46:02 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://164586f95be5217e9907f33afab50987697ba11a54753cff40252d1c251f5f81,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:46:14.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2047" for this suite. • [SLOW TEST:23.486 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":33,"skipped":566,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:46:14.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 1 23:46:14.550: INFO: Waiting up to 5m0s for pod "downwardapi-volume-308fc37c-6ea2-43be-8f8b-69c5209b2c16" in namespace "projected-5076" to be "Succeeded or Failed" Apr 1 23:46:14.559: INFO: Pod "downwardapi-volume-308fc37c-6ea2-43be-8f8b-69c5209b2c16": Phase="Pending", Reason="", readiness=false. Elapsed: 8.628957ms Apr 1 23:46:16.563: INFO: Pod "downwardapi-volume-308fc37c-6ea2-43be-8f8b-69c5209b2c16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012855582s Apr 1 23:46:18.568: INFO: Pod "downwardapi-volume-308fc37c-6ea2-43be-8f8b-69c5209b2c16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017273768s STEP: Saw pod success Apr 1 23:46:18.568: INFO: Pod "downwardapi-volume-308fc37c-6ea2-43be-8f8b-69c5209b2c16" satisfied condition "Succeeded or Failed" Apr 1 23:46:18.571: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-308fc37c-6ea2-43be-8f8b-69c5209b2c16 container client-container: STEP: delete the pod Apr 1 23:46:18.609: INFO: Waiting for pod downwardapi-volume-308fc37c-6ea2-43be-8f8b-69c5209b2c16 to disappear Apr 1 23:46:18.640: INFO: Pod downwardapi-volume-308fc37c-6ea2-43be-8f8b-69c5209b2c16 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:46:18.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5076" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":566,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:46:18.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 1 23:46:21.842: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:46:21.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2684" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":587,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:46:21.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:46:27.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6203" for this suite. 
• [SLOW TEST:5.104 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":36,"skipped":603,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:46:27.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-bb7e568b-84fa-46e1-a317-f9e56ae277a5 in namespace container-probe-3372 Apr 1 23:46:31.178: INFO: Started pod liveness-bb7e568b-84fa-46e1-a317-f9e56ae277a5 in namespace container-probe-3372 STEP: checking the pod's current state and verifying that restartCount is present Apr 1 23:46:31.180: INFO: Initial restart count of pod liveness-bb7e568b-84fa-46e1-a317-f9e56ae277a5 is 0 Apr 1 23:46:49.334: INFO: Restart count of pod container-probe-3372/liveness-bb7e568b-84fa-46e1-a317-f9e56ae277a5 is now 1 (18.153714032s 
elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:46:49.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3372" for this suite. • [SLOW TEST:22.352 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":620,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:46:49.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:47:05.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3615" for this suite. • [SLOW TEST:16.325 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":38,"skipped":620,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:47:05.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 1 23:47:13.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 1 23:47:13.844: INFO: Pod pod-with-poststart-exec-hook still exists Apr 1 23:47:15.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 1 23:47:15.849: INFO: Pod pod-with-poststart-exec-hook still exists Apr 1 23:47:17.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 1 23:47:17.849: INFO: Pod pod-with-poststart-exec-hook still exists Apr 1 23:47:19.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 1 23:47:19.849: INFO: Pod pod-with-poststart-exec-hook still exists Apr 1 23:47:21.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 1 23:47:21.848: INFO: Pod 
pod-with-poststart-exec-hook still exists Apr 1 23:47:23.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 1 23:47:23.847: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:47:23.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7544" for this suite. • [SLOW TEST:18.152 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":627,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:47:23.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 1 23:47:27.945: INFO: &Pod{ObjectMeta:{send-events-6a2a6cb5-7213-4e31-883a-90df860288ae events-3866 /api/v1/namespaces/events-3866/pods/send-events-6a2a6cb5-7213-4e31-883a-90df860288ae 8e13869b-bc37-4ba7-8eb5-6acd110d3de5 4659430 0 2020-04-01 23:47:23 +0000 UTC map[name:foo time:917588179] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l574f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l574f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l574f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:47:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:47:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:47:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-01 23:47:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.154,StartTime:2020-04-01 23:47:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-01 23:47:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://704db9e092f093d34c9ee73f80b2d6fe516bb78855247e625579a5ed6291f9af,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.154,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 1 23:47:29.950: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 1 23:47:31.954: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:47:31.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3866" for this suite. • [SLOW TEST:8.158 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":40,"skipped":634,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:47:32.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:47:48.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9916" for this suite. • [SLOW TEST:16.242 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":41,"skipped":671,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:47:48.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 1 23:47:48.352: INFO: Waiting up to 5m0s for pod "pod-f165161d-62d6-468e-a362-de20bf51a4ab" in namespace "emptydir-6223" to be "Succeeded or Failed" Apr 1 23:47:48.381: INFO: Pod "pod-f165161d-62d6-468e-a362-de20bf51a4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 29.186515ms Apr 1 23:47:50.408: INFO: Pod "pod-f165161d-62d6-468e-a362-de20bf51a4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05646475s Apr 1 23:47:52.412: INFO: Pod "pod-f165161d-62d6-468e-a362-de20bf51a4ab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.06076127s STEP: Saw pod success Apr 1 23:47:52.412: INFO: Pod "pod-f165161d-62d6-468e-a362-de20bf51a4ab" satisfied condition "Succeeded or Failed" Apr 1 23:47:52.416: INFO: Trying to get logs from node latest-worker2 pod pod-f165161d-62d6-468e-a362-de20bf51a4ab container test-container: STEP: delete the pod Apr 1 23:47:52.463: INFO: Waiting for pod pod-f165161d-62d6-468e-a362-de20bf51a4ab to disappear Apr 1 23:47:52.468: INFO: Pod pod-f165161d-62d6-468e-a362-de20bf51a4ab no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:47:52.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6223" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":672,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:47:52.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-a8b75c95-1e9a-4077-afae-b5be3c493649 STEP: Creating a pod to 
test consume secrets Apr 1 23:47:52.541: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0d439334-8b65-4e57-8481-00b5b4f617bd" in namespace "projected-8465" to be "Succeeded or Failed" Apr 1 23:47:52.561: INFO: Pod "pod-projected-secrets-0d439334-8b65-4e57-8481-00b5b4f617bd": Phase="Pending", Reason="", readiness=false. Elapsed: 19.161693ms Apr 1 23:47:54.564: INFO: Pod "pod-projected-secrets-0d439334-8b65-4e57-8481-00b5b4f617bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022965746s Apr 1 23:47:56.569: INFO: Pod "pod-projected-secrets-0d439334-8b65-4e57-8481-00b5b4f617bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027475481s STEP: Saw pod success Apr 1 23:47:56.569: INFO: Pod "pod-projected-secrets-0d439334-8b65-4e57-8481-00b5b4f617bd" satisfied condition "Succeeded or Failed" Apr 1 23:47:56.572: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-0d439334-8b65-4e57-8481-00b5b4f617bd container projected-secret-volume-test: STEP: delete the pod Apr 1 23:47:56.589: INFO: Waiting for pod pod-projected-secrets-0d439334-8b65-4e57-8481-00b5b4f617bd to disappear Apr 1 23:47:56.594: INFO: Pod pod-projected-secrets-0d439334-8b65-4e57-8481-00b5b4f617bd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:47:56.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8465" for this suite. 
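The projected-secret pod exercised above can be sketched roughly as follows. This is a hypothetical reconstruction from the test name, not the manifest the suite generated: the secret name is taken from the log, but the user/group IDs, mode, key layout, and the busybox stand-in image are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  securityContext:
    runAsUser: 1000    # non-root, per the test's [LinuxOnly] non-root requirement (value illustrative)
    fsGroup: 1001      # group ownership applied to the projected files (value illustrative)
  containers:
  - name: projected-secret-volume-test
    image: busybox     # stand-in for the suite's agnhost test image
    command: [ "sh", "-c", "id && ls -ln /etc/projected-secret-volume" ]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440   # applied to projected files that do not set their own mode (value illustrative)
      sources:
      - secret:
          name: projected-secret-test-a8b75c95-1e9a-4077-afae-b5be3c493649
```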
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":675,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:47:56.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7172 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7172 I0401 23:47:56.782933 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7172, replica count: 2 I0401 23:47:59.833437 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0401 23:48:02.833623 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 1 23:48:02.833: INFO: Creating new exec pod Apr 1 23:48:07.847: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7172 execpod5mb7w -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 1 23:48:10.587: INFO: stderr: "I0401 23:48:10.485077 579 log.go:172] (0xc000800000) (0xc0006bf4a0) Create stream\nI0401 23:48:10.485267 579 log.go:172] (0xc000800000) (0xc0006bf4a0) Stream added, broadcasting: 1\nI0401 23:48:10.488659 579 log.go:172] (0xc000800000) Reply frame received for 1\nI0401 23:48:10.488709 579 log.go:172] (0xc000800000) (0xc0006bf540) Create stream\nI0401 23:48:10.488724 579 log.go:172] (0xc000800000) (0xc0006bf540) Stream added, broadcasting: 3\nI0401 23:48:10.490228 579 log.go:172] (0xc000800000) Reply frame received for 3\nI0401 23:48:10.490277 579 log.go:172] (0xc000800000) (0xc00001c000) Create stream\nI0401 23:48:10.490295 579 log.go:172] (0xc000800000) (0xc00001c000) Stream added, broadcasting: 5\nI0401 23:48:10.491143 579 log.go:172] (0xc000800000) Reply frame received for 5\nI0401 23:48:10.576603 579 log.go:172] (0xc000800000) Data frame received for 5\nI0401 23:48:10.576634 579 log.go:172] (0xc00001c000) (5) Data frame handling\nI0401 23:48:10.576649 579 log.go:172] (0xc00001c000) (5) Data frame sent\nI0401 23:48:10.576656 579 log.go:172] (0xc000800000) Data frame received for 5\nI0401 23:48:10.576660 579 log.go:172] (0xc00001c000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0401 23:48:10.576669 579 log.go:172] (0xc000800000) Data frame received for 3\nI0401 23:48:10.576728 579 log.go:172] (0xc0006bf540) (3) Data frame handling\nI0401 23:48:10.578891 579 log.go:172] (0xc000800000) Data frame received for 1\nI0401 23:48:10.578923 579 log.go:172] (0xc0006bf4a0) (1) Data frame handling\nI0401 23:48:10.578942 579 log.go:172] (0xc0006bf4a0) (1) Data frame sent\nI0401 23:48:10.578961 579 log.go:172] (0xc000800000) (0xc0006bf4a0) Stream removed, 
broadcasting: 1\nI0401 23:48:10.579036 579 log.go:172] (0xc000800000) Go away received\nI0401 23:48:10.579310 579 log.go:172] (0xc000800000) (0xc0006bf4a0) Stream removed, broadcasting: 1\nI0401 23:48:10.579330 579 log.go:172] (0xc000800000) (0xc0006bf540) Stream removed, broadcasting: 3\nI0401 23:48:10.579351 579 log.go:172] (0xc000800000) (0xc00001c000) Stream removed, broadcasting: 5\n" Apr 1 23:48:10.587: INFO: stdout: "" Apr 1 23:48:10.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7172 execpod5mb7w -- /bin/sh -x -c nc -zv -t -w 2 10.96.126.133 80' Apr 1 23:48:10.797: INFO: stderr: "I0401 23:48:10.719100 613 log.go:172] (0xc0005258c0) (0xc000988000) Create stream\nI0401 23:48:10.719158 613 log.go:172] (0xc0005258c0) (0xc000988000) Stream added, broadcasting: 1\nI0401 23:48:10.721919 613 log.go:172] (0xc0005258c0) Reply frame received for 1\nI0401 23:48:10.721956 613 log.go:172] (0xc0005258c0) (0xc0009d4000) Create stream\nI0401 23:48:10.721968 613 log.go:172] (0xc0005258c0) (0xc0009d4000) Stream added, broadcasting: 3\nI0401 23:48:10.723037 613 log.go:172] (0xc0005258c0) Reply frame received for 3\nI0401 23:48:10.723066 613 log.go:172] (0xc0005258c0) (0xc0009880a0) Create stream\nI0401 23:48:10.723075 613 log.go:172] (0xc0005258c0) (0xc0009880a0) Stream added, broadcasting: 5\nI0401 23:48:10.723959 613 log.go:172] (0xc0005258c0) Reply frame received for 5\nI0401 23:48:10.791796 613 log.go:172] (0xc0005258c0) Data frame received for 5\nI0401 23:48:10.791825 613 log.go:172] (0xc0009880a0) (5) Data frame handling\nI0401 23:48:10.791834 613 log.go:172] (0xc0009880a0) (5) Data frame sent\nI0401 23:48:10.791840 613 log.go:172] (0xc0005258c0) Data frame received for 5\nI0401 23:48:10.791845 613 log.go:172] (0xc0009880a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.126.133 80\nConnection to 10.96.126.133 80 port [tcp/http] succeeded!\nI0401 23:48:10.791863 613 log.go:172] 
(0xc0005258c0) Data frame received for 3\nI0401 23:48:10.791869 613 log.go:172] (0xc0009d4000) (3) Data frame handling\nI0401 23:48:10.793500 613 log.go:172] (0xc0005258c0) Data frame received for 1\nI0401 23:48:10.793520 613 log.go:172] (0xc000988000) (1) Data frame handling\nI0401 23:48:10.793544 613 log.go:172] (0xc000988000) (1) Data frame sent\nI0401 23:48:10.793564 613 log.go:172] (0xc0005258c0) (0xc000988000) Stream removed, broadcasting: 1\nI0401 23:48:10.793598 613 log.go:172] (0xc0005258c0) Go away received\nI0401 23:48:10.793896 613 log.go:172] (0xc0005258c0) (0xc000988000) Stream removed, broadcasting: 1\nI0401 23:48:10.793916 613 log.go:172] (0xc0005258c0) (0xc0009d4000) Stream removed, broadcasting: 3\nI0401 23:48:10.793924 613 log.go:172] (0xc0005258c0) (0xc0009880a0) Stream removed, broadcasting: 5\n" Apr 1 23:48:10.797: INFO: stdout: "" Apr 1 23:48:10.797: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:48:10.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7172" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:14.216 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":44,"skipped":688,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:48:10.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Apr 1 23:48:10.919: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9720" to be "Succeeded or Failed" Apr 1 23:48:10.945: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 26.00987ms Apr 1 23:48:12.953: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033625088s Apr 1 23:48:14.956: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.036971208s Apr 1 23:48:16.960: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040449936s STEP: Saw pod success Apr 1 23:48:16.960: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Apr 1 23:48:16.962: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 1 23:48:17.000: INFO: Waiting for pod pod-host-path-test to disappear Apr 1 23:48:17.031: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:48:17.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9720" for this suite. • [SLOW TEST:6.208 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":706,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:48:17.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 1 23:48:17.100: INFO: Waiting up to 5m0s for pod "pod-15a762f0-16d1-4f0b-abd6-12aed64e6612" in namespace "emptydir-4882" to be "Succeeded or Failed" Apr 1 23:48:17.122: INFO: Pod "pod-15a762f0-16d1-4f0b-abd6-12aed64e6612": Phase="Pending", Reason="", readiness=false. Elapsed: 21.872191ms Apr 1 23:48:19.125: INFO: Pod "pod-15a762f0-16d1-4f0b-abd6-12aed64e6612": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025455118s Apr 1 23:48:21.129: INFO: Pod "pod-15a762f0-16d1-4f0b-abd6-12aed64e6612": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029668077s STEP: Saw pod success Apr 1 23:48:21.129: INFO: Pod "pod-15a762f0-16d1-4f0b-abd6-12aed64e6612" satisfied condition "Succeeded or Failed" Apr 1 23:48:21.133: INFO: Trying to get logs from node latest-worker2 pod pod-15a762f0-16d1-4f0b-abd6-12aed64e6612 container test-container: STEP: delete the pod Apr 1 23:48:21.261: INFO: Waiting for pod pod-15a762f0-16d1-4f0b-abd6-12aed64e6612 to disappear Apr 1 23:48:21.268: INFO: Pod pod-15a762f0-16d1-4f0b-abd6-12aed64e6612 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:48:21.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4882" for this suite. 
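The emptyDir pods these tests create follow a common shape; a rough sketch under stated assumptions (the suite generates its manifests and container commands programmatically, so the names, busybox stand-in, and shell command here are illustrative). `medium: ""` uses the node's default storage, as in this (root,0666,default) case; the earlier tmpfs variants set `medium: Memory` instead, and the 0666 in the test name is the file mode exercised inside the container, not a field of the volume.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example   # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox   # stand-in for the suite's agnhost test image
    command: [ "sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume" ]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: ""   # node default; use "Memory" for the tmpfs-backed variants
```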
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":747,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:48:21.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0401 23:48:22.458013 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 1 23:48:22.458: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:48:22.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4747" for this suite. 
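The orphaning behavior verified above is driven by `deleteOptions.propagationPolicy`. A minimal sketch of the delete-options body (the `propagationPolicy` field and its values are the real `meta/v1` ones; the exact `apiVersion` string and surrounding request are illustrative):

```yaml
# Sent as the body of a DELETE on the Deployment. With Orphan, the garbage
# collector removes owner references instead of deleting the ReplicaSet,
# which is what this test checks for.
apiVersion: v1          # meta/v1 DeleteOptions; apiVersion shown here is illustrative
kind: DeleteOptions
propagationPolicy: Orphan   # alternatives: Background, Foreground
```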
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":47,"skipped":775,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:48:22.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Apr 1 23:48:22.508: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Apr 1 23:48:22.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4533'
Apr 1 23:48:22.845: INFO: stderr: ""
Apr 1 23:48:22.845: INFO: stdout: "service/agnhost-slave created\n"
Apr 1 23:48:22.846: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Apr 1 23:48:22.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4533'
Apr 1 23:48:23.536: INFO: stderr: ""
Apr 1 23:48:23.536: INFO: stdout: "service/agnhost-master created\n"
Apr 1 23:48:23.536: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Apr 1 23:48:23.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4533'
Apr 1 23:48:24.158: INFO: stderr: ""
Apr 1 23:48:24.158: INFO: stdout: "service/frontend created\n"
Apr 1 23:48:24.158: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Apr 1 23:48:24.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4533'
Apr 1 23:48:24.394: INFO: stderr: ""
Apr 1 23:48:24.394: INFO: stdout: "deployment.apps/frontend created\n"
Apr 1 23:48:24.394: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 1 23:48:24.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4533'
Apr 1 23:48:24.704: INFO: stderr: ""
Apr 1 23:48:24.704: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 1 23:48:24.704: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 1 23:48:24.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4533'
Apr 1 23:48:24.956: INFO: stderr: ""
Apr 1 23:48:24.956: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 1 23:48:24.956: INFO: Waiting for all frontend pods to be Running.
Apr 1 23:48:35.006: INFO: Waiting for frontend to serve content.
Apr 1 23:48:35.017: INFO: Trying to add a new entry to the guestbook.
Apr 1 23:48:35.027: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 1 23:48:35.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4533'
Apr 1 23:48:35.198: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Apr 1 23:48:35.198: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 1 23:48:35.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4533' Apr 1 23:48:35.322: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 1 23:48:35.323: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 1 23:48:35.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4533' Apr 1 23:48:35.461: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 1 23:48:35.461: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 1 23:48:35.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4533' Apr 1 23:48:35.551: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 1 23:48:35.551: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 1 23:48:35.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4533' Apr 1 23:48:35.666: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 1 23:48:35.666: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 1 23:48:35.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4533' Apr 1 23:48:36.044: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 1 23:48:36.044: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:48:36.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4533" for this suite. 
• [SLOW TEST:13.766 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":48,"skipped":801,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:48:36.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 1 23:48:36.356: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:48:40.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8465" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":822,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:48:40.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the 
expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:49:09.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-446" for this suite. • [SLOW TEST:28.615 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":861,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:49:09.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-820fad14-9fb3-49a1-a0ab-5ca9ff3eed69 STEP: Creating a pod to test consume configMaps Apr 1 23:49:09.117: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5a7b2343-35ec-48cb-9ef1-5aeba390568f" in namespace "projected-4792" to be "Succeeded or Failed" Apr 1 23:49:09.157: INFO: Pod "pod-projected-configmaps-5a7b2343-35ec-48cb-9ef1-5aeba390568f": Phase="Pending", Reason="", readiness=false. Elapsed: 40.426458ms Apr 1 23:49:11.162: INFO: Pod "pod-projected-configmaps-5a7b2343-35ec-48cb-9ef1-5aeba390568f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044828453s Apr 1 23:49:13.169: INFO: Pod "pod-projected-configmaps-5a7b2343-35ec-48cb-9ef1-5aeba390568f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051763615s STEP: Saw pod success Apr 1 23:49:13.169: INFO: Pod "pod-projected-configmaps-5a7b2343-35ec-48cb-9ef1-5aeba390568f" satisfied condition "Succeeded or Failed" Apr 1 23:49:13.172: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-5a7b2343-35ec-48cb-9ef1-5aeba390568f container projected-configmap-volume-test: STEP: delete the pod Apr 1 23:49:13.187: INFO: Waiting for pod pod-projected-configmaps-5a7b2343-35ec-48cb-9ef1-5aeba390568f to disappear Apr 1 23:49:13.192: INFO: Pod pod-projected-configmaps-5a7b2343-35ec-48cb-9ef1-5aeba390568f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:49:13.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4792" for this suite. 
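The "Succeeded or Failed" wait above is the framework's standard pod-completion poll: it reads the pod's phase repeatedly until a terminal phase appears or the 5m0s budget expires. A minimal sketch of the same terminal-phase check (the `pod_done` helper is illustrative, not part of the framework):

```shell
# Illustrative check mirroring the framework's "Succeeded or Failed"
# condition: a pod is finished only once its phase is terminal.
pod_done() {
  case "$1" in
    Succeeded|Failed) return 0 ;;   # terminal phases: stop waiting
    *) return 1 ;;                  # Pending/Running/Unknown: keep polling
  esac
}

pod_done "Pending"   && echo "done" || echo "still waiting"
pod_done "Succeeded" && echo "done" || echo "still waiting"
```

In a live cluster the phase would come from `kubectl get pod <name> -o jsonpath='{.status.phase}'`, polled in a loop until `pod_done` succeeds or the timeout elapses.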
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":871,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:49:13.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 1 23:49:17.884: INFO: Successfully updated pod "labelsupdate3ca083c0-4ebb-418f-a306-bd4705e5bce6" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:49:19.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1237" for this suite. 
• [SLOW TEST:6.697 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":876,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:49:19.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 1 23:49:24.526: INFO: Successfully updated pod "pod-update-0eae51c6-8337-4377-8a0e-a97c16a4f956" STEP: verifying the updated pod is in kubernetes Apr 1 23:49:24.534: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:49:24.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8380" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":904,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:49:24.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 1 23:49:25.456: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 1 23:49:27.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381765, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381765, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381765, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381765, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 1 23:49:30.517: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:49:42.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6987" for this suite. STEP: Destroying namespace "webhook-6987-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.326 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":54,"skipped":920,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:49:42.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-6dccdae5-9e28-4920-8063-645af6af3714 STEP: Creating secret with name secret-projected-all-test-volume-91937767-8438-4761-8129-b5e22b68c3fe STEP: Creating a pod to test Check all projections for projected volume plugin Apr 1 23:49:42.952: INFO: Waiting up to 5m0s for pod "projected-volume-d072e6be-cec1-46d9-ac1b-9ba8b37da5c9" in namespace "projected-3332" to be 
"Succeeded or Failed" Apr 1 23:49:42.969: INFO: Pod "projected-volume-d072e6be-cec1-46d9-ac1b-9ba8b37da5c9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.350417ms Apr 1 23:49:44.974: INFO: Pod "projected-volume-d072e6be-cec1-46d9-ac1b-9ba8b37da5c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021607975s Apr 1 23:49:46.978: INFO: Pod "projected-volume-d072e6be-cec1-46d9-ac1b-9ba8b37da5c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025712648s STEP: Saw pod success Apr 1 23:49:46.978: INFO: Pod "projected-volume-d072e6be-cec1-46d9-ac1b-9ba8b37da5c9" satisfied condition "Succeeded or Failed" Apr 1 23:49:46.980: INFO: Trying to get logs from node latest-worker pod projected-volume-d072e6be-cec1-46d9-ac1b-9ba8b37da5c9 container projected-all-volume-test: STEP: delete the pod Apr 1 23:49:46.996: INFO: Waiting for pod projected-volume-d072e6be-cec1-46d9-ac1b-9ba8b37da5c9 to disappear Apr 1 23:49:47.001: INFO: Pod projected-volume-d072e6be-cec1-46d9-ac1b-9ba8b37da5c9 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:49:47.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3332" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":55,"skipped":935,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:49:47.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 1 23:49:47.063: INFO: Waiting up to 5m0s for pod "pod-759671c4-2ead-4a22-9d15-ee10c6bf8a7c" in namespace "emptydir-2600" to be "Succeeded or Failed" Apr 1 23:49:47.086: INFO: Pod "pod-759671c4-2ead-4a22-9d15-ee10c6bf8a7c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.814074ms Apr 1 23:49:49.091: INFO: Pod "pod-759671c4-2ead-4a22-9d15-ee10c6bf8a7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027359449s Apr 1 23:49:51.094: INFO: Pod "pod-759671c4-2ead-4a22-9d15-ee10c6bf8a7c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031149489s STEP: Saw pod success Apr 1 23:49:51.095: INFO: Pod "pod-759671c4-2ead-4a22-9d15-ee10c6bf8a7c" satisfied condition "Succeeded or Failed" Apr 1 23:49:51.097: INFO: Trying to get logs from node latest-worker2 pod pod-759671c4-2ead-4a22-9d15-ee10c6bf8a7c container test-container: STEP: delete the pod Apr 1 23:49:51.148: INFO: Waiting for pod pod-759671c4-2ead-4a22-9d15-ee10c6bf8a7c to disappear Apr 1 23:49:51.151: INFO: Pod pod-759671c4-2ead-4a22-9d15-ee10c6bf8a7c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:49:51.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2600" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:49:51.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-650.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-650.svc.cluster.local;test 
-n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-650.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-650.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-650.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-650.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 1 23:49:57.258: INFO: DNS probes using dns-650/dns-test-8d352983-f48a-41e2-a126-eeccf9bc795d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:49:57.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-650" for this suite. 
• [SLOW TEST:6.151 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":57,"skipped":969,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:49:57.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-9892b93b-3eae-444e-bad3-576a640fbf15 STEP: Creating a pod to test consume secrets Apr 1 23:49:57.613: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bf247120-818a-4703-933d-c5c897721b86" in namespace "projected-5566" to be "Succeeded or Failed" Apr 1 23:49:57.681: INFO: Pod "pod-projected-secrets-bf247120-818a-4703-933d-c5c897721b86": Phase="Pending", Reason="", readiness=false. Elapsed: 67.722839ms Apr 1 23:49:59.714: INFO: Pod "pod-projected-secrets-bf247120-818a-4703-933d-c5c897721b86": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.100735764s Apr 1 23:50:01.718: INFO: Pod "pod-projected-secrets-bf247120-818a-4703-933d-c5c897721b86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104667601s STEP: Saw pod success Apr 1 23:50:01.718: INFO: Pod "pod-projected-secrets-bf247120-818a-4703-933d-c5c897721b86" satisfied condition "Succeeded or Failed" Apr 1 23:50:01.721: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-bf247120-818a-4703-933d-c5c897721b86 container projected-secret-volume-test: STEP: delete the pod Apr 1 23:50:01.737: INFO: Waiting for pod pod-projected-secrets-bf247120-818a-4703-933d-c5c897721b86 to disappear Apr 1 23:50:01.747: INFO: Pod pod-projected-secrets-bf247120-818a-4703-933d-c5c897721b86 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:50:01.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5566" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":978,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:50:01.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-1309 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 1 23:50:01.820: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 1 23:50:01.881: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 1 23:50:03.884: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 1 23:50:05.885: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 1 23:50:07.885: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 1 23:50:09.885: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 1 23:50:11.885: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 1 23:50:13.885: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 1 23:50:15.885: INFO: The status of Pod netserver-0 is Running (Ready = false) 
Apr 1 23:50:17.885: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 1 23:50:19.885: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 1 23:50:21.885: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 1 23:50:23.885: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 1 23:50:23.891: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 1 23:50:27.914: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.171:8080/dial?request=hostname&protocol=http&host=10.244.2.170&port=8080&tries=1'] Namespace:pod-network-test-1309 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 1 23:50:27.914: INFO: >>> kubeConfig: /root/.kube/config I0401 23:50:27.951917 7 log.go:172] (0xc002c3c4d0) (0xc0009a7900) Create stream I0401 23:50:27.951948 7 log.go:172] (0xc002c3c4d0) (0xc0009a7900) Stream added, broadcasting: 1 I0401 23:50:27.961281 7 log.go:172] (0xc002c3c4d0) Reply frame received for 1 I0401 23:50:27.961349 7 log.go:172] (0xc002c3c4d0) (0xc001312000) Create stream I0401 23:50:27.961366 7 log.go:172] (0xc002c3c4d0) (0xc001312000) Stream added, broadcasting: 3 I0401 23:50:27.963622 7 log.go:172] (0xc002c3c4d0) Reply frame received for 3 I0401 23:50:27.963687 7 log.go:172] (0xc002c3c4d0) (0xc0009a79a0) Create stream I0401 23:50:27.963708 7 log.go:172] (0xc002c3c4d0) (0xc0009a79a0) Stream added, broadcasting: 5 I0401 23:50:27.964740 7 log.go:172] (0xc002c3c4d0) Reply frame received for 5 I0401 23:50:28.026737 7 log.go:172] (0xc002c3c4d0) Data frame received for 3 I0401 23:50:28.026771 7 log.go:172] (0xc001312000) (3) Data frame handling I0401 23:50:28.026792 7 log.go:172] (0xc001312000) (3) Data frame sent I0401 23:50:28.027465 7 log.go:172] (0xc002c3c4d0) Data frame received for 3 I0401 23:50:28.027497 7 log.go:172] (0xc001312000) (3) Data frame handling I0401 23:50:28.027617 7 
log.go:172] (0xc002c3c4d0) Data frame received for 5 I0401 23:50:28.027656 7 log.go:172] (0xc0009a79a0) (5) Data frame handling I0401 23:50:28.029589 7 log.go:172] (0xc002c3c4d0) Data frame received for 1 I0401 23:50:28.029605 7 log.go:172] (0xc0009a7900) (1) Data frame handling I0401 23:50:28.029612 7 log.go:172] (0xc0009a7900) (1) Data frame sent I0401 23:50:28.029619 7 log.go:172] (0xc002c3c4d0) (0xc0009a7900) Stream removed, broadcasting: 1 I0401 23:50:28.029627 7 log.go:172] (0xc002c3c4d0) Go away received I0401 23:50:28.030043 7 log.go:172] (0xc002c3c4d0) (0xc0009a7900) Stream removed, broadcasting: 1 I0401 23:50:28.030071 7 log.go:172] (0xc002c3c4d0) (0xc001312000) Stream removed, broadcasting: 3 I0401 23:50:28.030083 7 log.go:172] (0xc002c3c4d0) (0xc0009a79a0) Stream removed, broadcasting: 5 Apr 1 23:50:28.030: INFO: Waiting for responses: map[] Apr 1 23:50:28.033: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.171:8080/dial?request=hostname&protocol=http&host=10.244.1.40&port=8080&tries=1'] Namespace:pod-network-test-1309 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 1 23:50:28.033: INFO: >>> kubeConfig: /root/.kube/config I0401 23:50:28.069555 7 log.go:172] (0xc002e484d0) (0xc002b841e0) Create stream I0401 23:50:28.069583 7 log.go:172] (0xc002e484d0) (0xc002b841e0) Stream added, broadcasting: 1 I0401 23:50:28.071627 7 log.go:172] (0xc002e484d0) Reply frame received for 1 I0401 23:50:28.071672 7 log.go:172] (0xc002e484d0) (0xc0011e6dc0) Create stream I0401 23:50:28.071680 7 log.go:172] (0xc002e484d0) (0xc0011e6dc0) Stream added, broadcasting: 3 I0401 23:50:28.072814 7 log.go:172] (0xc002e484d0) Reply frame received for 3 I0401 23:50:28.072839 7 log.go:172] (0xc002e484d0) (0xc002b84280) Create stream I0401 23:50:28.072845 7 log.go:172] (0xc002e484d0) (0xc002b84280) Stream added, broadcasting: 5 I0401 23:50:28.073760 7 log.go:172] (0xc002e484d0) Reply 
frame received for 5 I0401 23:50:28.141875 7 log.go:172] (0xc002e484d0) Data frame received for 3 I0401 23:50:28.141921 7 log.go:172] (0xc0011e6dc0) (3) Data frame handling I0401 23:50:28.141968 7 log.go:172] (0xc0011e6dc0) (3) Data frame sent I0401 23:50:28.142454 7 log.go:172] (0xc002e484d0) Data frame received for 5 I0401 23:50:28.142489 7 log.go:172] (0xc002b84280) (5) Data frame handling I0401 23:50:28.142540 7 log.go:172] (0xc002e484d0) Data frame received for 3 I0401 23:50:28.142571 7 log.go:172] (0xc0011e6dc0) (3) Data frame handling I0401 23:50:28.144371 7 log.go:172] (0xc002e484d0) Data frame received for 1 I0401 23:50:28.144391 7 log.go:172] (0xc002b841e0) (1) Data frame handling I0401 23:50:28.144412 7 log.go:172] (0xc002b841e0) (1) Data frame sent I0401 23:50:28.144427 7 log.go:172] (0xc002e484d0) (0xc002b841e0) Stream removed, broadcasting: 1 I0401 23:50:28.144446 7 log.go:172] (0xc002e484d0) Go away received I0401 23:50:28.144523 7 log.go:172] (0xc002e484d0) (0xc002b841e0) Stream removed, broadcasting: 1 I0401 23:50:28.144548 7 log.go:172] (0xc002e484d0) (0xc0011e6dc0) Stream removed, broadcasting: 3 I0401 23:50:28.144562 7 log.go:172] (0xc002e484d0) (0xc002b84280) Stream removed, broadcasting: 5 Apr 1 23:50:28.144: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:50:28.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1309" for this suite. 
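Each connectivity check above execs `curl` inside the test pod against the netserver's `/dial` endpoint, which dials the target pod and reports the hostname it heard back. The probe URL is assembled like this (IPs taken from this run):

```shell
# Build the /dial probe URL the framework curls from the test pod.
probe_pod="10.244.2.171"   # test-container-pod IP from this run
target="10.244.2.170"      # netserver pod IP to dial
url="http://${probe_pod}:8080/dial?request=hostname&protocol=http&host=${target}&port=8080&tries=1"
echo "$url"
```

`tries=1` makes a single dial attempt per check; "Waiting for responses: map[]" in the log means no expected hostname is still outstanding, so the test passes.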
• [SLOW TEST:26.399 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":1019,"failed":0} [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:50:28.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0401 23:51:08.354427 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 1 23:51:08.354: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:51:08.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1956" for this suite. 
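The orphan behaviour exercised above is driven by the delete options sent with the rc deletion: with `propagationPolicy: Orphan` the garbage collector must leave the rc's pods in place, which is why the test waits 30 seconds to confirm nothing was mistakenly deleted. A sketch of the standard core/v1 request body, shown for illustration:

```shell
# DeleteOptions body that orphans dependents instead of cascading the delete.
body='{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'
echo "$body"
```

From the CLI the equivalent is `kubectl delete rc <name> --cascade=orphan` (older kubectl releases spelled it `--cascade=false`).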
• [SLOW TEST:40.206 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":60,"skipped":1019,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:51:08.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Apr 1 23:51:08.426: INFO: Waiting up to 5m0s for pod "client-containers-d8f3f42f-03a3-4a8c-a992-1ae2abab5ebd" in namespace "containers-7508" to be "Succeeded or Failed"
Apr 1 23:51:08.428: INFO: Pod "client-containers-d8f3f42f-03a3-4a8c-a992-1ae2abab5ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113785ms
Apr 1 23:51:10.433: INFO: Pod "client-containers-d8f3f42f-03a3-4a8c-a992-1ae2abab5ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00718043s
Apr 1 23:51:12.438: INFO: Pod "client-containers-d8f3f42f-03a3-4a8c-a992-1ae2abab5ebd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011679964s
STEP: Saw pod success
Apr 1 23:51:12.438: INFO: Pod "client-containers-d8f3f42f-03a3-4a8c-a992-1ae2abab5ebd" satisfied condition "Succeeded or Failed"
Apr 1 23:51:12.441: INFO: Trying to get logs from node latest-worker pod client-containers-d8f3f42f-03a3-4a8c-a992-1ae2abab5ebd container test-container:
STEP: delete the pod
Apr 1 23:51:12.460: INFO: Waiting for pod client-containers-d8f3f42f-03a3-4a8c-a992-1ae2abab5ebd to disappear
Apr 1 23:51:12.497: INFO: Pod client-containers-d8f3f42f-03a3-4a8c-a992-1ae2abab5ebd no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 1 23:51:12.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7508" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":1028,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:51:12.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 1 23:51:13.010: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 1 23:51:15.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381873, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381873, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381873, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721381872, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 1 23:51:18.494: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 1 23:51:18.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8971" for this suite.
STEP: Destroying namespace "webhook-8971-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.140 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":62,"skipped":1058,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:51:18.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 1 23:51:25.609: INFO: 9 pods remaining
Apr 1 23:51:25.609: INFO: 0 pods has nil DeletionTimestamp
Apr 1 23:51:25.609: INFO:
Apr 1 23:51:26.212: INFO: 0 pods remaining
Apr 1 23:51:26.212: INFO: 0 pods has nil DeletionTimestamp
Apr 1 23:51:26.212: INFO:
STEP: Gathering metrics
W0401 23:51:27.818111 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 1 23:51:27.818: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 1 23:51:27.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2319" for this suite.
• [SLOW TEST:9.658 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":63,"skipped":1065,"failed":0}
SSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:51:28.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-e1d1e501-0461-495c-81f3-85c7d34a7b9b
STEP: Creating a pod to test consume secrets
Apr 1 23:51:29.013: INFO: Waiting up to 5m0s for pod "pod-secrets-34c57f8c-a6e8-4e0b-a1de-0a259400ad76" in namespace "secrets-2456" to be "Succeeded or Failed"
Apr 1 23:51:29.108: INFO: Pod "pod-secrets-34c57f8c-a6e8-4e0b-a1de-0a259400ad76": Phase="Pending", Reason="", readiness=false. Elapsed: 95.597192ms
Apr 1 23:51:31.113: INFO: Pod "pod-secrets-34c57f8c-a6e8-4e0b-a1de-0a259400ad76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100185253s
Apr 1 23:51:33.117: INFO: Pod "pod-secrets-34c57f8c-a6e8-4e0b-a1de-0a259400ad76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104522888s
STEP: Saw pod success
Apr 1 23:51:33.117: INFO: Pod "pod-secrets-34c57f8c-a6e8-4e0b-a1de-0a259400ad76" satisfied condition "Succeeded or Failed"
Apr 1 23:51:33.121: INFO: Trying to get logs from node latest-worker pod pod-secrets-34c57f8c-a6e8-4e0b-a1de-0a259400ad76 container secret-volume-test:
STEP: delete the pod
Apr 1 23:51:33.137: INFO: Waiting for pod pod-secrets-34c57f8c-a6e8-4e0b-a1de-0a259400ad76 to disappear
Apr 1 23:51:33.148: INFO: Pod pod-secrets-34c57f8c-a6e8-4e0b-a1de-0a259400ad76 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 1 23:51:33.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2456" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1071,"failed":0}
SSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:51:33.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-2182
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2182
STEP: Deleting pre-stop pod
Apr 1 23:51:46.268: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 1 23:51:46.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2182" for this suite.
• [SLOW TEST:13.135 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":65,"skipped":1078,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:51:46.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 1 23:51:46.366: INFO: Waiting up to 5m0s for pod "downward-api-eba8226d-1c31-4119-bb38-2794ead39904" in namespace "downward-api-6266" to be "Succeeded or Failed"
Apr 1 23:51:46.374: INFO: Pod "downward-api-eba8226d-1c31-4119-bb38-2794ead39904": Phase="Pending", Reason="", readiness=false. Elapsed: 7.733818ms
Apr 1 23:51:48.378: INFO: Pod "downward-api-eba8226d-1c31-4119-bb38-2794ead39904": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011966072s
Apr 1 23:51:50.382: INFO: Pod "downward-api-eba8226d-1c31-4119-bb38-2794ead39904": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016186037s
STEP: Saw pod success
Apr 1 23:51:50.382: INFO: Pod "downward-api-eba8226d-1c31-4119-bb38-2794ead39904" satisfied condition "Succeeded or Failed"
Apr 1 23:51:50.385: INFO: Trying to get logs from node latest-worker2 pod downward-api-eba8226d-1c31-4119-bb38-2794ead39904 container dapi-container:
STEP: delete the pod
Apr 1 23:51:50.428: INFO: Waiting for pod downward-api-eba8226d-1c31-4119-bb38-2794ead39904 to disappear
Apr 1 23:51:50.449: INFO: Pod downward-api-eba8226d-1c31-4119-bb38-2794ead39904 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 1 23:51:50.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6266" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1095,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:51:50.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Apr 1 23:51:50.558: INFO: Waiting up to 5m0s for pod "var-expansion-0031d26d-d8b3-42e4-ac58-1dbb9b732c67" in namespace "var-expansion-1336" to be "Succeeded or Failed"
Apr 1 23:51:50.561: INFO: Pod "var-expansion-0031d26d-d8b3-42e4-ac58-1dbb9b732c67": Phase="Pending", Reason="", readiness=false. Elapsed: 3.058849ms
Apr 1 23:51:52.621: INFO: Pod "var-expansion-0031d26d-d8b3-42e4-ac58-1dbb9b732c67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062157368s
Apr 1 23:51:54.625: INFO: Pod "var-expansion-0031d26d-d8b3-42e4-ac58-1dbb9b732c67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066449176s
STEP: Saw pod success
Apr 1 23:51:54.625: INFO: Pod "var-expansion-0031d26d-d8b3-42e4-ac58-1dbb9b732c67" satisfied condition "Succeeded or Failed"
Apr 1 23:51:54.628: INFO: Trying to get logs from node latest-worker2 pod var-expansion-0031d26d-d8b3-42e4-ac58-1dbb9b732c67 container dapi-container:
STEP: delete the pod
Apr 1 23:51:54.651: INFO: Waiting for pod var-expansion-0031d26d-d8b3-42e4-ac58-1dbb9b732c67 to disappear
Apr 1 23:51:54.693: INFO: Pod var-expansion-0031d26d-d8b3-42e4-ac58-1dbb9b732c67 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 1 23:51:54.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1336" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1109,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:51:54.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-d281c77d-37a6-408e-972d-df865d15dc72
STEP: Creating a pod to test consume configMaps
Apr 1 23:51:54.761: INFO: Waiting up to 5m0s for pod "pod-configmaps-d04e7468-2b65-4a03-93a0-678143eb510f" in namespace "configmap-2836" to be "Succeeded or Failed"
Apr 1 23:51:54.771: INFO: Pod "pod-configmaps-d04e7468-2b65-4a03-93a0-678143eb510f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.275941ms
Apr 1 23:51:56.774: INFO: Pod "pod-configmaps-d04e7468-2b65-4a03-93a0-678143eb510f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013177204s
Apr 1 23:51:58.777: INFO: Pod "pod-configmaps-d04e7468-2b65-4a03-93a0-678143eb510f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016472807s
STEP: Saw pod success
Apr 1 23:51:58.778: INFO: Pod "pod-configmaps-d04e7468-2b65-4a03-93a0-678143eb510f" satisfied condition "Succeeded or Failed"
Apr 1 23:51:58.780: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d04e7468-2b65-4a03-93a0-678143eb510f container configmap-volume-test:
STEP: delete the pod
Apr 1 23:51:58.809: INFO: Waiting for pod pod-configmaps-d04e7468-2b65-4a03-93a0-678143eb510f to disappear
Apr 1 23:51:58.819: INFO: Pod pod-configmaps-d04e7468-2b65-4a03-93a0-678143eb510f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 1 23:51:58.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2836" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1161,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:51:58.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 1 23:52:09.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9819" for this suite.
• [SLOW TEST:11.104 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":69,"skipped":1173,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:52:09.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 1 23:53:09.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7160" for this suite.
• [SLOW TEST:60.079 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1187,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 1 23:53:10.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 1 23:53:10.106: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 23:53:10.124: INFO: Number of nodes with available pods: 0
Apr 1 23:53:10.124: INFO: Node latest-worker is running more than one daemon pod
Apr 1 23:53:11.129: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 23:53:11.133: INFO: Number of nodes with available pods: 0
Apr 1 23:53:11.133: INFO: Node latest-worker is running more than one daemon pod
Apr 1 23:53:12.150: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 23:53:12.153: INFO: Number of nodes with available pods: 0
Apr 1 23:53:12.153: INFO: Node latest-worker is running more than one daemon pod
Apr 1 23:53:13.150: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 23:53:13.154: INFO: Number of nodes with available pods: 0
Apr 1 23:53:13.154: INFO: Node latest-worker is running more than one daemon pod
Apr 1 23:53:14.131: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 23:53:14.137: INFO: Number of nodes with available pods: 2
Apr 1 23:53:14.137: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 1 23:53:14.180: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 23:53:14.186: INFO: Number of nodes with available pods: 1
Apr 1 23:53:14.186: INFO: Node latest-worker is running more than one daemon pod
Apr 1 23:53:15.299: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 23:53:15.303: INFO: Number of nodes with available pods: 1
Apr 1 23:53:15.303: INFO: Node latest-worker is running more than one daemon pod
Apr 1 23:53:16.252: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 23:53:16.256: INFO: Number of nodes with available pods: 1
Apr 1 23:53:16.256: INFO: Node latest-worker is running more than one daemon pod
Apr 1 23:53:17.190: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 23:53:17.194: INFO: Number of nodes with available pods: 1
Apr 1 23:53:17.194: INFO: Node latest-worker is running more than one daemon pod
Apr 1 23:53:18.190: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 23:53:18.193: INFO: Number of nodes with available pods: 2
Apr 1 23:53:18.193: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-777, will wait for the garbage collector to delete the pods
Apr 1 23:53:18.277: INFO: Deleting DaemonSet.extensions daemon-set took: 25.359843ms
Apr 1 23:53:18.577: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.360199ms
Apr 1 23:53:32.881: INFO: Number of nodes with available pods: 0
Apr 1 23:53:32.881: INFO: Number of running nodes: 0, number of available pods: 0
Apr 1 23:53:32.888: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-777/daemonsets","resourceVersion":"4662119"},"items":null}
Apr 1 23:53:32.890: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-777/pods","resourceVersion":"4662119"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 1 23:53:32.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-777" for this suite.
• [SLOW TEST:22.899 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":71,"skipped":1195,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:53:32.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-5303 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5303 to expose endpoints map[] Apr 1 23:53:33.011: INFO: Get endpoints failed (5.784881ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 1 23:53:34.015: INFO: successfully validated that service endpoint-test2 in namespace services-5303 exposes endpoints map[] (1.009569575s elapsed) STEP: Creating pod pod1 in namespace services-5303 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5303 to expose endpoints map[pod1:[80]] Apr 1 23:53:37.055: INFO: 
successfully validated that service endpoint-test2 in namespace services-5303 exposes endpoints map[pod1:[80]] (3.032663011s elapsed) STEP: Creating pod pod2 in namespace services-5303 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5303 to expose endpoints map[pod1:[80] pod2:[80]] Apr 1 23:53:40.278: INFO: successfully validated that service endpoint-test2 in namespace services-5303 exposes endpoints map[pod1:[80] pod2:[80]] (3.219538377s elapsed) STEP: Deleting pod pod1 in namespace services-5303 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5303 to expose endpoints map[pod2:[80]] Apr 1 23:53:41.301: INFO: successfully validated that service endpoint-test2 in namespace services-5303 exposes endpoints map[pod2:[80]] (1.018362385s elapsed) STEP: Deleting pod pod2 in namespace services-5303 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5303 to expose endpoints map[] Apr 1 23:53:42.322: INFO: successfully validated that service endpoint-test2 in namespace services-5303 exposes endpoints map[] (1.01511098s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:53:42.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5303" for this suite. 
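The endpoint validations logged above repeatedly fetch the service's Endpoints object and compare it to an expected map such as `map[pod1:[80] pod2:[80]]`. A simplified sketch of that comparison, using plain dicts shaped loosely like an Endpoints object's subsets (field names are illustrative, not a real client library's):

```python
def endpoints_match(expected, subsets):
    """Flatten Endpoints-style subsets into {pod_name: sorted_ports} and
    compare against the expected map, e.g. {"pod1": [80], "pod2": [80]}.

    `subsets` loosely mimics a v1.Endpoints `subsets` field; a real
    object carries richer address and targetRef structures.
    """
    observed = {}
    for subset in subsets:
        ports = sorted(p["port"] for p in subset.get("ports", []))
        for addr in subset.get("addresses", []):
            observed[addr["targetRef"]] = ports
    return observed == expected

subsets = [{"ports": [{"port": 80}],
            "addresses": [{"targetRef": "pod1"}, {"targetRef": "pod2"}]}]
print(endpoints_match({"pod1": [80], "pod2": [80]}, subsets))  # True
print(endpoints_match({"pod2": [80]}, subsets))                # False
```

As each pod is deleted, its address drops out of the subsets, so the observed map shrinks back toward `map[]`, which is exactly the sequence the log records.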
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:9.543 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":72,"skipped":1205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:53:42.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 1 23:53:43.446: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 1 23:53:45.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721382023, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721382023, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721382023, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721382023, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 1 23:53:48.498: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:53:48.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6556" for this suite. STEP: Destroying namespace "webhook-6556-markers" for this suite. 
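The webhook test above toggles the CREATE operation in and out of a mutating webhook's rules and checks whether ConfigMap creations get mutated. A toy model of that rule matching, with a deliberately simplified rule shape (real `RuleWithOperations` entries also match on apiGroups and apiVersions):

```python
def rule_matches(rule, operation, resource):
    """Return True if an admission request (operation, resource) matches
    the rule. Simplified stand-in for a MutatingWebhookConfiguration
    rule: real rules additionally match apiGroups and apiVersions."""
    return operation in rule["operations"] and resource in rule["resources"]

rule = {"operations": ["CREATE"], "resources": ["configmaps"]}
print(rule_matches(rule, "CREATE", "configmaps"))  # True

# "Updating the rules to not include the create operation":
rule["operations"] = ["UPDATE"]
print(rule_matches(rule, "CREATE", "configmaps"))  # False

# "Patching the rules to include the create operation" again:
rule["operations"].append("CREATE")
print(rule_matches(rule, "CREATE", "configmaps"))  # True
```

This is why the test sees one ConfigMap created unmutated and the next mutated: the API server only routes a request to the webhook when some rule matches its operation and resource.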
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.334 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":73,"skipped":1239,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:53:48.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 1 23:53:48.848: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 1 23:53:48.885: INFO: Waiting for terminating namespaces to be deleted... 
Apr 1 23:53:48.888: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 1 23:53:48.909: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 1 23:53:48.909: INFO: Container kube-proxy ready: true, restart count 0 Apr 1 23:53:48.909: INFO: pod2 from services-5303 started at 2020-04-01 23:53:37 +0000 UTC (1 container statuses recorded) Apr 1 23:53:48.909: INFO: Container pause ready: false, restart count 0 Apr 1 23:53:48.909: INFO: sample-webhook-deployment-6cc9cc9dc-j4qv7 from webhook-6556 started at 2020-04-01 23:53:43 +0000 UTC (1 container statuses recorded) Apr 1 23:53:48.909: INFO: Container sample-webhook ready: true, restart count 0 Apr 1 23:53:48.909: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 1 23:53:48.909: INFO: Container kindnet-cni ready: true, restart count 0 Apr 1 23:53:48.909: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 1 23:53:48.926: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 1 23:53:48.926: INFO: Container kindnet-cni ready: true, restart count 0 Apr 1 23:53:48.926: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 1 23:53:48.926: INFO: Container kube-proxy ready: true, restart count 0 Apr 1 23:53:48.926: INFO: pod1 from services-5303 started at 2020-04-01 23:53:34 +0000 UTC (1 container statuses recorded) Apr 1 23:53:48.926: INFO: Container pause ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. 
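The conflict this predicate test exercises: two pods requesting the same hostPort and protocol collide even when one binds 0.0.0.0 (the default for an empty hostIP) and the other 127.0.0.1, because the wildcard address overlaps every host IP. A minimal sketch of that check (illustrative, not the scheduler's actual implementation):

```python
def host_ports_conflict(a, b):
    """Two (hostIP, hostPort, protocol) requests conflict when port and
    protocol match and either side binds the wildcard 0.0.0.0, or the
    host IPs are identical. Illustrative version of the node-ports
    scheduling predicate, not the scheduler's own code."""
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == "0.0.0.0" or ip_b == "0.0.0.0" or ip_a == ip_b

pod4 = ("0.0.0.0", 54322, "TCP")    # empty hostIP defaults to 0.0.0.0
pod5 = ("127.0.0.1", 54322, "TCP")
print(host_ports_conflict(pod4, pod5))  # True: pod5 cannot schedule there
print(host_ports_conflict(("127.0.0.1", 54321, "TCP"), pod5))  # False
```

That is why pod5 in the steps below stays unscheduled on the node where pod4 runs, which is the behavior the test asserts.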
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a3781781-965e-4852-8689-2200d8aaf94c 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-a3781781-965e-4852-8689-2200d8aaf94c off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-a3781781-965e-4852-8689-2200d8aaf94c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:58:57.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4266" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.363 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":74,"skipped":1242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:58:57.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-lcm8m in namespace proxy-5419 I0401 23:58:57.269898 7 runners.go:190] Created replication controller with name: proxy-service-lcm8m, namespace: proxy-5419, replica count: 1 I0401 23:58:58.320412 7 runners.go:190] proxy-service-lcm8m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0401 23:58:59.320678 7 runners.go:190] proxy-service-lcm8m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0401 23:59:00.320944 7 runners.go:190] proxy-service-lcm8m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0401 23:59:01.321374 7 runners.go:190] proxy-service-lcm8m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0401 23:59:02.321628 7 runners.go:190] proxy-service-lcm8m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0401 23:59:03.321854 7 runners.go:190] proxy-service-lcm8m Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 1 23:59:03.324: INFO: setup took 6.138291098s, 
starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 1 23:59:03.331: INFO: (0) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 6.791129ms) Apr 1 23:59:03.331: INFO: (0) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... (200; 6.902378ms) Apr 1 23:59:03.332: INFO: (0) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 7.490906ms) Apr 1 23:59:03.332: INFO: (0) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... (200; 7.393507ms) Apr 1 23:59:03.332: INFO: (0) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 7.421866ms) Apr 1 23:59:03.332: INFO: (0) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 7.368183ms) Apr 1 23:59:03.333: INFO: (0) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 8.586398ms) Apr 1 23:59:03.333: INFO: (0) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname2/proxy/: bar (200; 8.839682ms) Apr 1 23:59:03.334: INFO: (0) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 9.639255ms) Apr 1 23:59:03.334: INFO: (0) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 9.636895ms) Apr 1 23:59:03.334: INFO: (0) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 9.64213ms) Apr 1 23:59:03.340: INFO: (0) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 15.897046ms) Apr 1 23:59:03.340: INFO: (0) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname2/proxy/: tls qux (200; 15.960503ms) Apr 1 23:59:03.341: INFO: (0) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 17.141875ms) Apr 1 23:59:03.342: INFO: (0) 
/api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: ... (200; 3.013081ms) Apr 1 23:59:03.345: INFO: (1) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 3.133875ms) Apr 1 23:59:03.353: INFO: (1) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 11.560795ms) Apr 1 23:59:03.354: INFO: (1) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 12.133811ms) Apr 1 23:59:03.354: INFO: (1) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 12.169241ms) Apr 1 23:59:03.354: INFO: (1) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 12.343616ms) Apr 1 23:59:03.354: INFO: (1) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 12.733213ms) Apr 1 23:59:03.356: INFO: (1) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... (200; 14.06146ms) Apr 1 23:59:03.357: INFO: (1) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 15.461622ms) Apr 1 23:59:03.357: INFO: (1) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test<... (200; 5.023979ms) Apr 1 23:59:03.365: INFO: (2) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... 
(200; 5.085402ms) Apr 1 23:59:03.365: INFO: (2) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 5.229081ms) Apr 1 23:59:03.365: INFO: (2) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 5.196803ms) Apr 1 23:59:03.365: INFO: (2) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 5.260162ms) Apr 1 23:59:03.365: INFO: (2) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 5.289725ms) Apr 1 23:59:03.365: INFO: (2) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 5.315269ms) Apr 1 23:59:03.365: INFO: (2) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 5.334923ms) Apr 1 23:59:03.365: INFO: (2) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test (200; 3.22525ms) Apr 1 23:59:03.370: INFO: (3) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 3.305264ms) Apr 1 23:59:03.370: INFO: (3) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... (200; 3.302367ms) Apr 1 23:59:03.370: INFO: (3) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 3.493005ms) Apr 1 23:59:03.370: INFO: (3) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... (200; 3.257154ms) Apr 1 23:59:03.370: INFO: (3) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 3.64225ms) Apr 1 23:59:03.370: INFO: (3) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: ... 
(200; 3.28355ms) Apr 1 23:59:03.375: INFO: (4) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test (200; 3.973687ms) Apr 1 23:59:03.375: INFO: (4) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 4.038221ms) Apr 1 23:59:03.375: INFO: (4) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 4.030655ms) Apr 1 23:59:03.375: INFO: (4) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 4.050806ms) Apr 1 23:59:03.375: INFO: (4) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 4.098292ms) Apr 1 23:59:03.375: INFO: (4) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 4.18262ms) Apr 1 23:59:03.375: INFO: (4) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... (200; 4.100142ms) Apr 1 23:59:03.375: INFO: (4) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname2/proxy/: tls qux (200; 4.218508ms) Apr 1 23:59:03.377: INFO: (4) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname1/proxy/: tls baz (200; 5.59845ms) Apr 1 23:59:03.377: INFO: (4) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname2/proxy/: bar (200; 5.563309ms) Apr 1 23:59:03.377: INFO: (4) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 5.861822ms) Apr 1 23:59:03.377: INFO: (4) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 5.872708ms) Apr 1 23:59:03.377: INFO: (4) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 5.984146ms) Apr 1 23:59:03.380: INFO: (5) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 2.93825ms) Apr 1 23:59:03.380: INFO: (5) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 2.916039ms) Apr 1 
23:59:03.381: INFO: (5) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 3.22322ms) Apr 1 23:59:03.381: INFO: (5) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... (200; 3.119504ms) Apr 1 23:59:03.381: INFO: (5) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 3.754435ms) Apr 1 23:59:03.381: INFO: (5) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: ... (200; 3.655285ms) Apr 1 23:59:03.381: INFO: (5) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 3.649573ms) Apr 1 23:59:03.382: INFO: (5) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname2/proxy/: bar (200; 4.853443ms) Apr 1 23:59:03.382: INFO: (5) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 4.941627ms) Apr 1 23:59:03.382: INFO: (5) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 4.947692ms) Apr 1 23:59:03.382: INFO: (5) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname2/proxy/: tls qux (200; 5.044817ms) Apr 1 23:59:03.382: INFO: (5) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname1/proxy/: tls baz (200; 5.003587ms) Apr 1 23:59:03.383: INFO: (5) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 5.808738ms) Apr 1 23:59:03.385: INFO: (6) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 1.856325ms) Apr 1 23:59:03.387: INFO: (6) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test (200; 3.942561ms) Apr 1 23:59:03.387: INFO: (6) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 4.012309ms) Apr 1 23:59:03.387: INFO: (6) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 3.962925ms) Apr 1 23:59:03.387: 
INFO: (6) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 4.09308ms) Apr 1 23:59:03.392: INFO: (6) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... (200; 8.382394ms) Apr 1 23:59:03.392: INFO: (6) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... (200; 8.361637ms) Apr 1 23:59:03.392: INFO: (6) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 8.433102ms) Apr 1 23:59:03.392: INFO: (6) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 8.478429ms) Apr 1 23:59:03.392: INFO: (6) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 8.739331ms) Apr 1 23:59:03.392: INFO: (6) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname2/proxy/: bar (200; 8.674282ms) Apr 1 23:59:03.392: INFO: (6) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 8.692549ms) Apr 1 23:59:03.392: INFO: (6) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname1/proxy/: tls baz (200; 8.851936ms) Apr 1 23:59:03.392: INFO: (6) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname2/proxy/: tls qux (200; 8.866502ms) Apr 1 23:59:03.397: INFO: (7) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... (200; 4.529034ms) Apr 1 23:59:03.397: INFO: (7) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... 
(200; 4.679844ms) Apr 1 23:59:03.397: INFO: (7) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 4.674869ms) Apr 1 23:59:03.397: INFO: (7) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 4.675827ms) Apr 1 23:59:03.397: INFO: (7) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 4.77095ms) Apr 1 23:59:03.397: INFO: (7) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 4.850679ms) Apr 1 23:59:03.397: INFO: (7) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 4.836801ms) Apr 1 23:59:03.397: INFO: (7) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 4.787357ms) Apr 1 23:59:03.397: INFO: (7) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 4.956553ms) Apr 1 23:59:03.397: INFO: (7) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: ... (200; 3.472074ms) Apr 1 23:59:03.402: INFO: (8) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 3.535416ms) Apr 1 23:59:03.402: INFO: (8) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 3.645699ms) Apr 1 23:59:03.403: INFO: (8) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 3.898943ms) Apr 1 23:59:03.403: INFO: (8) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test<... 
(200; 7.726225ms) Apr 1 23:59:03.407: INFO: (8) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 7.760776ms) Apr 1 23:59:03.407: INFO: (8) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 7.846218ms) Apr 1 23:59:03.407: INFO: (8) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 7.839808ms) Apr 1 23:59:03.407: INFO: (8) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname1/proxy/: tls baz (200; 7.828858ms) Apr 1 23:59:03.407: INFO: (8) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname2/proxy/: tls qux (200; 7.811218ms) Apr 1 23:59:03.407: INFO: (8) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname2/proxy/: bar (200; 7.916389ms) Apr 1 23:59:03.407: INFO: (8) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 7.831521ms) Apr 1 23:59:03.407: INFO: (8) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 7.879359ms) Apr 1 23:59:03.410: INFO: (9) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 3.173766ms) Apr 1 23:59:03.410: INFO: (9) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 3.206585ms) Apr 1 23:59:03.410: INFO: (9) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... (200; 3.43061ms) Apr 1 23:59:03.410: INFO: (9) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 3.38965ms) Apr 1 23:59:03.410: INFO: (9) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 3.471994ms) Apr 1 23:59:03.411: INFO: (9) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... 
(200; 3.843598ms) Apr 1 23:59:03.411: INFO: (9) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 3.896137ms) Apr 1 23:59:03.411: INFO: (9) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 3.939219ms) Apr 1 23:59:03.411: INFO: (9) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname1/proxy/: tls baz (200; 3.96091ms) Apr 1 23:59:03.411: INFO: (9) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test<... (200; 17.316204ms) Apr 1 23:59:03.429: INFO: (10) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 17.34317ms) Apr 1 23:59:03.429: INFO: (10) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... (200; 17.373352ms) Apr 1 23:59:03.429: INFO: (10) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 17.402051ms) Apr 1 23:59:03.429: INFO: (10) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 17.436409ms) Apr 1 23:59:03.429: INFO: (10) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: ... (200; 8.374074ms) Apr 1 23:59:03.439: INFO: (11) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 8.367732ms) Apr 1 23:59:03.439: INFO: (11) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 8.382125ms) Apr 1 23:59:03.439: INFO: (11) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... 
(200; 8.421912ms) Apr 1 23:59:03.439: INFO: (11) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 8.497682ms) Apr 1 23:59:03.439: INFO: (11) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 8.666925ms) Apr 1 23:59:03.439: INFO: (11) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 8.635673ms) Apr 1 23:59:03.439: INFO: (11) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test<... (200; 3.34641ms) Apr 1 23:59:03.446: INFO: (12) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 4.336775ms) Apr 1 23:59:03.446: INFO: (12) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... (200; 4.279094ms) Apr 1 23:59:03.446: INFO: (12) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname1/proxy/: tls baz (200; 4.310815ms) Apr 1 23:59:03.446: INFO: (12) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test (200; 4.311773ms) Apr 1 23:59:03.446: INFO: (12) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 4.445811ms) Apr 1 23:59:03.446: INFO: (12) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 4.363428ms) Apr 1 23:59:03.446: INFO: (12) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 4.402981ms) Apr 1 23:59:03.446: INFO: (12) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname2/proxy/: bar (200; 4.404211ms) Apr 1 23:59:03.446: INFO: (12) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 4.455707ms) Apr 1 23:59:03.446: INFO: (12) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 4.415549ms) Apr 1 23:59:03.446: INFO: (12) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 
4.432843ms) Apr 1 23:59:03.448: INFO: (13) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... (200; 2.196755ms) Apr 1 23:59:03.449: INFO: (13) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 3.189141ms) Apr 1 23:59:03.449: INFO: (13) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... (200; 3.162617ms) Apr 1 23:59:03.449: INFO: (13) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 3.282894ms) Apr 1 23:59:03.450: INFO: (13) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 4.204859ms) Apr 1 23:59:03.451: INFO: (13) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 4.653707ms) Apr 1 23:59:03.451: INFO: (13) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname2/proxy/: bar (200; 4.729257ms) Apr 1 23:59:03.451: INFO: (13) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 4.718498ms) Apr 1 23:59:03.451: INFO: (13) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test (200; 5.543059ms) Apr 1 23:59:03.452: INFO: (13) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 6.204581ms) Apr 1 23:59:03.452: INFO: (13) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname1/proxy/: tls baz (200; 6.224728ms) Apr 1 23:59:03.452: INFO: (13) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname2/proxy/: tls qux (200; 6.229983ms) Apr 1 23:59:03.454: INFO: (14) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test (200; 4.353318ms) Apr 1 23:59:03.457: INFO: (14) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... (200; 4.408313ms) Apr 1 23:59:03.457: INFO: (14) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... 
(200; 4.480266ms) Apr 1 23:59:03.457: INFO: (14) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 4.485839ms) Apr 1 23:59:03.457: INFO: (14) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname1/proxy/: tls baz (200; 4.759568ms) Apr 1 23:59:03.458: INFO: (14) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 5.161584ms) Apr 1 23:59:03.458: INFO: (14) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 5.425564ms) Apr 1 23:59:03.459: INFO: (14) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname2/proxy/: bar (200; 6.438294ms) Apr 1 23:59:03.459: INFO: (14) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname2/proxy/: tls qux (200; 6.470321ms) Apr 1 23:59:03.459: INFO: (14) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 6.497081ms) Apr 1 23:59:03.461: INFO: (15) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 2.219606ms) Apr 1 23:59:03.463: INFO: (15) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 3.761663ms) Apr 1 23:59:03.463: INFO: (15) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 3.866047ms) Apr 1 23:59:03.463: INFO: (15) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 3.917763ms) Apr 1 23:59:03.463: INFO: (15) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 3.991284ms) Apr 1 23:59:03.464: INFO: (15) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... 
(200; 4.37546ms) Apr 1 23:59:03.464: INFO: (15) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 4.573175ms) Apr 1 23:59:03.464: INFO: (15) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 4.597097ms) Apr 1 23:59:03.464: INFO: (15) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... (200; 4.562689ms) Apr 1 23:59:03.464: INFO: (15) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 4.549951ms) Apr 1 23:59:03.464: INFO: (15) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 4.592616ms) Apr 1 23:59:03.464: INFO: (15) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test<... (200; 2.728137ms) Apr 1 23:59:03.467: INFO: (16) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 2.798588ms) Apr 1 23:59:03.468: INFO: (16) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 3.646327ms) Apr 1 23:59:03.469: INFO: (16) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 4.446181ms) Apr 1 23:59:03.472: INFO: (16) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname2/proxy/: bar (200; 8.020853ms) Apr 1 23:59:03.473: INFO: (16) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname1/proxy/: tls baz (200; 8.271019ms) Apr 1 23:59:03.473: INFO: (16) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 8.708699ms) Apr 1 23:59:03.473: INFO: (16) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 8.124121ms) Apr 1 23:59:03.473: INFO: (16) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 8.721507ms) Apr 1 23:59:03.473: INFO: (16) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 
8.970138ms) Apr 1 23:59:03.473: INFO: (16) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 8.83543ms) Apr 1 23:59:03.473: INFO: (16) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname2/proxy/: tls qux (200; 9.081668ms) Apr 1 23:59:03.473: INFO: (16) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: ... (200; 8.827639ms) Apr 1 23:59:03.477: INFO: (17) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 3.310737ms) Apr 1 23:59:03.478: INFO: (17) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 3.111765ms) Apr 1 23:59:03.478: INFO: (17) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname1/proxy/: tls baz (200; 3.783121ms) Apr 1 23:59:03.478: INFO: (17) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 3.204743ms) Apr 1 23:59:03.478: INFO: (17) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 3.33565ms) Apr 1 23:59:03.478: INFO: (17) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname2/proxy/: bar (200; 4.119411ms) Apr 1 23:59:03.478: INFO: (17) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 4.3004ms) Apr 1 23:59:03.478: INFO: (17) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 4.187017ms) Apr 1 23:59:03.478: INFO: (17) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... 
(200; 4.469789ms) Apr 1 23:59:03.479: INFO: (17) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname1/proxy/: foo (200; 4.80292ms) Apr 1 23:59:03.479: INFO: (17) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname2/proxy/: tls qux (200; 5.09967ms) Apr 1 23:59:03.479: INFO: (17) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 4.64222ms) Apr 1 23:59:03.479: INFO: (17) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 4.007069ms) Apr 1 23:59:03.479: INFO: (17) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... (200; 4.355099ms) Apr 1 23:59:03.479: INFO: (17) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: test (200; 4.954186ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 4.968733ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:1080/proxy/: ... (200; 5.037986ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname2/proxy/: tls qux (200; 5.117564ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 5.144893ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... 
(200; 5.065112ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:462/proxy/: tls qux (200; 5.11143ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname2/proxy/: bar (200; 5.209286ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 5.265842ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname1/proxy/: tls baz (200; 5.424567ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/services/http:proxy-service-lcm8m:portname2/proxy/: bar (200; 5.37365ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/services/proxy-service-lcm8m:portname1/proxy/: foo (200; 5.488051ms) Apr 1 23:59:03.484: INFO: (18) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: ... (200; 3.371606ms) Apr 1 23:59:03.488: INFO: (19) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:1080/proxy/: test<... 
(200; 3.576668ms) Apr 1 23:59:03.488: INFO: (19) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:460/proxy/: tls baz (200; 3.793662ms) Apr 1 23:59:03.488: INFO: (19) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:162/proxy/: bar (200; 3.697789ms) Apr 1 23:59:03.488: INFO: (19) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd/proxy/: test (200; 3.785871ms) Apr 1 23:59:03.488: INFO: (19) /api/v1/namespaces/proxy-5419/pods/proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 3.780266ms) Apr 1 23:59:03.488: INFO: (19) /api/v1/namespaces/proxy-5419/services/https:proxy-service-lcm8m:tlsportname2/proxy/: tls qux (200; 3.77895ms) Apr 1 23:59:03.488: INFO: (19) /api/v1/namespaces/proxy-5419/pods/http:proxy-service-lcm8m-vjjpd:160/proxy/: foo (200; 3.869396ms) Apr 1 23:59:03.489: INFO: (19) /api/v1/namespaces/proxy-5419/pods/https:proxy-service-lcm8m-vjjpd:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 1 23:59:12.822: INFO: Waiting up to 5m0s for pod "downward-api-a8e111be-181c-47b2-8291-6bbd7170854e" in namespace "downward-api-2199" to be "Succeeded or Failed" Apr 1 23:59:12.838: INFO: Pod "downward-api-a8e111be-181c-47b2-8291-6bbd7170854e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.768828ms Apr 1 23:59:14.841: INFO: Pod "downward-api-a8e111be-181c-47b2-8291-6bbd7170854e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019216136s Apr 1 23:59:16.846: INFO: Pod "downward-api-a8e111be-181c-47b2-8291-6bbd7170854e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023658009s STEP: Saw pod success Apr 1 23:59:16.846: INFO: Pod "downward-api-a8e111be-181c-47b2-8291-6bbd7170854e" satisfied condition "Succeeded or Failed" Apr 1 23:59:16.849: INFO: Trying to get logs from node latest-worker pod downward-api-a8e111be-181c-47b2-8291-6bbd7170854e container dapi-container: STEP: delete the pod Apr 1 23:59:16.887: INFO: Waiting for pod downward-api-a8e111be-181c-47b2-8291-6bbd7170854e to disappear Apr 1 23:59:16.891: INFO: Pod downward-api-a8e111be-181c-47b2-8291-6bbd7170854e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:59:16.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2199" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1286,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:59:16.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 1 23:59:16.974: INFO: Waiting up to 5m0s for pod "pod-1ea7dbd7-c969-41b8-8bce-2dce6fe8f334" in namespace 
"emptydir-749" to be "Succeeded or Failed" Apr 1 23:59:16.982: INFO: Pod "pod-1ea7dbd7-c969-41b8-8bce-2dce6fe8f334": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324214ms Apr 1 23:59:18.987: INFO: Pod "pod-1ea7dbd7-c969-41b8-8bce-2dce6fe8f334": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01352247s Apr 1 23:59:20.992: INFO: Pod "pod-1ea7dbd7-c969-41b8-8bce-2dce6fe8f334": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017692785s STEP: Saw pod success Apr 1 23:59:20.992: INFO: Pod "pod-1ea7dbd7-c969-41b8-8bce-2dce6fe8f334" satisfied condition "Succeeded or Failed" Apr 1 23:59:20.995: INFO: Trying to get logs from node latest-worker pod pod-1ea7dbd7-c969-41b8-8bce-2dce6fe8f334 container test-container: STEP: delete the pod Apr 1 23:59:21.054: INFO: Waiting for pod pod-1ea7dbd7-c969-41b8-8bce-2dce6fe8f334 to disappear Apr 1 23:59:21.060: INFO: Pod pod-1ea7dbd7-c969-41b8-8bce-2dce6fe8f334 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:59:21.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-749" for this suite. 
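The emptyDir test above creates a pod whose volume is backed by tmpfs (`medium: Memory`) and verifies the mount's mode inside the container. A minimal sketch of the kind of manifest involved — the pod name, image, and command here are illustrative placeholders, not the generated values from the log:

```yaml
# Hypothetical sketch of a tmpfs-backed emptyDir pod like the one the
# e2e test creates: the container prints the mount type and permissions
# of the emptyDir volume so the test can assert on them.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-mode        # placeholder name, not from the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # illustrative image choice
    command: ["sh", "-c", "mount | grep /mnt/test && stat -c '%a' /mnt/test"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/test
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir
```

The pod runs to completion (`restartPolicy: Never`), which matches the "Succeeded or Failed" condition the suite waits on before reading the container logs.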
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1290,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:59:21.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-7140 STEP: creating replication controller nodeport-test in namespace services-7140 I0401 23:59:21.216411 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7140, replica count: 2 I0401 23:59:24.266816 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0401 23:59:27.267047 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 1 23:59:27.267: INFO: Creating new exec pod Apr 1 23:59:32.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7140 execpodm8tgq -- /bin/sh -x -c nc -zv -t -w 2 
nodeport-test 80' Apr 1 23:59:34.882: INFO: stderr: "I0401 23:59:34.776475 885 log.go:172] (0xc000958b00) (0xc0006cf7c0) Create stream\nI0401 23:59:34.776517 885 log.go:172] (0xc000958b00) (0xc0006cf7c0) Stream added, broadcasting: 1\nI0401 23:59:34.778750 885 log.go:172] (0xc000958b00) Reply frame received for 1\nI0401 23:59:34.778793 885 log.go:172] (0xc000958b00) (0xc000631680) Create stream\nI0401 23:59:34.778804 885 log.go:172] (0xc000958b00) (0xc000631680) Stream added, broadcasting: 3\nI0401 23:59:34.779576 885 log.go:172] (0xc000958b00) Reply frame received for 3\nI0401 23:59:34.779611 885 log.go:172] (0xc000958b00) (0xc0006cf860) Create stream\nI0401 23:59:34.779627 885 log.go:172] (0xc000958b00) (0xc0006cf860) Stream added, broadcasting: 5\nI0401 23:59:34.780387 885 log.go:172] (0xc000958b00) Reply frame received for 5\nI0401 23:59:34.875550 885 log.go:172] (0xc000958b00) Data frame received for 5\nI0401 23:59:34.875602 885 log.go:172] (0xc0006cf860) (5) Data frame handling\nI0401 23:59:34.875643 885 log.go:172] (0xc0006cf860) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0401 23:59:34.875858 885 log.go:172] (0xc000958b00) Data frame received for 5\nI0401 23:59:34.875902 885 log.go:172] (0xc0006cf860) (5) Data frame handling\nI0401 23:59:34.875938 885 log.go:172] (0xc0006cf860) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0401 23:59:34.876245 885 log.go:172] (0xc000958b00) Data frame received for 3\nI0401 23:59:34.876273 885 log.go:172] (0xc000631680) (3) Data frame handling\nI0401 23:59:34.876301 885 log.go:172] (0xc000958b00) Data frame received for 5\nI0401 23:59:34.876316 885 log.go:172] (0xc0006cf860) (5) Data frame handling\nI0401 23:59:34.878191 885 log.go:172] (0xc000958b00) Data frame received for 1\nI0401 23:59:34.878207 885 log.go:172] (0xc0006cf7c0) (1) Data frame handling\nI0401 23:59:34.878227 885 log.go:172] (0xc0006cf7c0) (1) Data frame sent\nI0401 23:59:34.878338 885 log.go:172] 
(0xc000958b00) (0xc0006cf7c0) Stream removed, broadcasting: 1\nI0401 23:59:34.878679 885 log.go:172] (0xc000958b00) (0xc0006cf7c0) Stream removed, broadcasting: 1\nI0401 23:59:34.878698 885 log.go:172] (0xc000958b00) (0xc000631680) Stream removed, broadcasting: 3\nI0401 23:59:34.878708 885 log.go:172] (0xc000958b00) (0xc0006cf860) Stream removed, broadcasting: 5\nI0401 23:59:34.878773 885 log.go:172] (0xc000958b00) Go away received\n" Apr 1 23:59:34.882: INFO: stdout: "" Apr 1 23:59:34.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7140 execpodm8tgq -- /bin/sh -x -c nc -zv -t -w 2 10.96.231.228 80' Apr 1 23:59:35.097: INFO: stderr: "I0401 23:59:35.022439 916 log.go:172] (0xc00003a6e0) (0xc0006e32c0) Create stream\nI0401 23:59:35.022498 916 log.go:172] (0xc00003a6e0) (0xc0006e32c0) Stream added, broadcasting: 1\nI0401 23:59:35.024447 916 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0401 23:59:35.024492 916 log.go:172] (0xc00003a6e0) (0xc000ada000) Create stream\nI0401 23:59:35.024507 916 log.go:172] (0xc00003a6e0) (0xc000ada000) Stream added, broadcasting: 3\nI0401 23:59:35.025400 916 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0401 23:59:35.025444 916 log.go:172] (0xc00003a6e0) (0xc0006e34a0) Create stream\nI0401 23:59:35.025461 916 log.go:172] (0xc00003a6e0) (0xc0006e34a0) Stream added, broadcasting: 5\nI0401 23:59:35.026252 916 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0401 23:59:35.091186 916 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0401 23:59:35.091220 916 log.go:172] (0xc000ada000) (3) Data frame handling\nI0401 23:59:35.091253 916 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0401 23:59:35.091264 916 log.go:172] (0xc0006e34a0) (5) Data frame handling\nI0401 23:59:35.091277 916 log.go:172] (0xc0006e34a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.231.228 80\nConnection to 10.96.231.228 80 port [tcp/http] 
succeeded!\nI0401 23:59:35.091389 916 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0401 23:59:35.091440 916 log.go:172] (0xc0006e34a0) (5) Data frame handling\nI0401 23:59:35.092632 916 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0401 23:59:35.092649 916 log.go:172] (0xc0006e32c0) (1) Data frame handling\nI0401 23:59:35.092660 916 log.go:172] (0xc0006e32c0) (1) Data frame sent\nI0401 23:59:35.092836 916 log.go:172] (0xc00003a6e0) (0xc0006e32c0) Stream removed, broadcasting: 1\nI0401 23:59:35.092878 916 log.go:172] (0xc00003a6e0) Go away received\nI0401 23:59:35.093530 916 log.go:172] (0xc00003a6e0) (0xc0006e32c0) Stream removed, broadcasting: 1\nI0401 23:59:35.093554 916 log.go:172] (0xc00003a6e0) (0xc000ada000) Stream removed, broadcasting: 3\nI0401 23:59:35.093569 916 log.go:172] (0xc00003a6e0) (0xc0006e34a0) Stream removed, broadcasting: 5\n" Apr 1 23:59:35.097: INFO: stdout: "" Apr 1 23:59:35.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7140 execpodm8tgq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30456' Apr 1 23:59:35.316: INFO: stderr: "I0401 23:59:35.234347 938 log.go:172] (0xc0009d0f20) (0xc0009d4460) Create stream\nI0401 23:59:35.234401 938 log.go:172] (0xc0009d0f20) (0xc0009d4460) Stream added, broadcasting: 1\nI0401 23:59:35.241763 938 log.go:172] (0xc0009d0f20) Reply frame received for 1\nI0401 23:59:35.241796 938 log.go:172] (0xc0009d0f20) (0xc000544a00) Create stream\nI0401 23:59:35.241805 938 log.go:172] (0xc0009d0f20) (0xc000544a00) Stream added, broadcasting: 3\nI0401 23:59:35.244832 938 log.go:172] (0xc0009d0f20) Reply frame received for 3\nI0401 23:59:35.244871 938 log.go:172] (0xc0009d0f20) (0xc0009d4000) Create stream\nI0401 23:59:35.244893 938 log.go:172] (0xc0009d0f20) (0xc0009d4000) Stream added, broadcasting: 5\nI0401 23:59:35.246482 938 log.go:172] (0xc0009d0f20) Reply frame received for 5\nI0401 23:59:35.309449 938 log.go:172] 
(0xc0009d0f20) Data frame received for 5\nI0401 23:59:35.309474 938 log.go:172] (0xc0009d4000) (5) Data frame handling\nI0401 23:59:35.309487 938 log.go:172] (0xc0009d4000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30456\nConnection to 172.17.0.13 30456 port [tcp/30456] succeeded!\nI0401 23:59:35.309638 938 log.go:172] (0xc0009d0f20) Data frame received for 3\nI0401 23:59:35.309653 938 log.go:172] (0xc000544a00) (3) Data frame handling\nI0401 23:59:35.309956 938 log.go:172] (0xc0009d0f20) Data frame received for 5\nI0401 23:59:35.309966 938 log.go:172] (0xc0009d4000) (5) Data frame handling\nI0401 23:59:35.311558 938 log.go:172] (0xc0009d0f20) Data frame received for 1\nI0401 23:59:35.311583 938 log.go:172] (0xc0009d4460) (1) Data frame handling\nI0401 23:59:35.311609 938 log.go:172] (0xc0009d4460) (1) Data frame sent\nI0401 23:59:35.311632 938 log.go:172] (0xc0009d0f20) (0xc0009d4460) Stream removed, broadcasting: 1\nI0401 23:59:35.311704 938 log.go:172] (0xc0009d0f20) Go away received\nI0401 23:59:35.312041 938 log.go:172] (0xc0009d0f20) (0xc0009d4460) Stream removed, broadcasting: 1\nI0401 23:59:35.312066 938 log.go:172] (0xc0009d0f20) (0xc000544a00) Stream removed, broadcasting: 3\nI0401 23:59:35.312079 938 log.go:172] (0xc0009d0f20) (0xc0009d4000) Stream removed, broadcasting: 5\n" Apr 1 23:59:35.316: INFO: stdout: "" Apr 1 23:59:35.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7140 execpodm8tgq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30456' Apr 1 23:59:35.517: INFO: stderr: "I0401 23:59:35.449638 961 log.go:172] (0xc0000e0bb0) (0xc0009e6000) Create stream\nI0401 23:59:35.449709 961 log.go:172] (0xc0000e0bb0) (0xc0009e6000) Stream added, broadcasting: 1\nI0401 23:59:35.452622 961 log.go:172] (0xc0000e0bb0) Reply frame received for 1\nI0401 23:59:35.452683 961 log.go:172] (0xc0000e0bb0) (0xc0009e60a0) Create stream\nI0401 23:59:35.452705 961 log.go:172] 
(0xc0000e0bb0) (0xc0009e60a0) Stream added, broadcasting: 3\nI0401 23:59:35.453935 961 log.go:172] (0xc0000e0bb0) Reply frame received for 3\nI0401 23:59:35.453989 961 log.go:172] (0xc0000e0bb0) (0xc0007dd2c0) Create stream\nI0401 23:59:35.454002 961 log.go:172] (0xc0000e0bb0) (0xc0007dd2c0) Stream added, broadcasting: 5\nI0401 23:59:35.455036 961 log.go:172] (0xc0000e0bb0) Reply frame received for 5\nI0401 23:59:35.509810 961 log.go:172] (0xc0000e0bb0) Data frame received for 3\nI0401 23:59:35.509833 961 log.go:172] (0xc0009e60a0) (3) Data frame handling\nI0401 23:59:35.509885 961 log.go:172] (0xc0000e0bb0) Data frame received for 5\nI0401 23:59:35.509946 961 log.go:172] (0xc0007dd2c0) (5) Data frame handling\nI0401 23:59:35.509977 961 log.go:172] (0xc0007dd2c0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30456\nConnection to 172.17.0.12 30456 port [tcp/30456] succeeded!\nI0401 23:59:35.510127 961 log.go:172] (0xc0000e0bb0) Data frame received for 5\nI0401 23:59:35.510143 961 log.go:172] (0xc0007dd2c0) (5) Data frame handling\nI0401 23:59:35.511945 961 log.go:172] (0xc0000e0bb0) Data frame received for 1\nI0401 23:59:35.511980 961 log.go:172] (0xc0009e6000) (1) Data frame handling\nI0401 23:59:35.512004 961 log.go:172] (0xc0009e6000) (1) Data frame sent\nI0401 23:59:35.512033 961 log.go:172] (0xc0000e0bb0) (0xc0009e6000) Stream removed, broadcasting: 1\nI0401 23:59:35.512058 961 log.go:172] (0xc0000e0bb0) Go away received\nI0401 23:59:35.512620 961 log.go:172] (0xc0000e0bb0) (0xc0009e6000) Stream removed, broadcasting: 1\nI0401 23:59:35.512662 961 log.go:172] (0xc0000e0bb0) (0xc0009e60a0) Stream removed, broadcasting: 3\nI0401 23:59:35.512692 961 log.go:172] (0xc0000e0bb0) (0xc0007dd2c0) Stream removed, broadcasting: 5\n" Apr 1 23:59:35.517: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:59:35.517: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "services-7140" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:14.458 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":78,"skipped":1303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:59:35.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-5bdb491d-e6b4-4835-a59c-845d5645cd82 STEP: Creating a pod to test consume configMaps Apr 1 23:59:35.616: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0a6a08bd-bbac-4bf7-98e3-f7f2a3acb892" in namespace "projected-2894" to be "Succeeded or Failed" Apr 1 23:59:35.639: INFO: Pod 
"pod-projected-configmaps-0a6a08bd-bbac-4bf7-98e3-f7f2a3acb892": Phase="Pending", Reason="", readiness=false. Elapsed: 22.400894ms Apr 1 23:59:37.665: INFO: Pod "pod-projected-configmaps-0a6a08bd-bbac-4bf7-98e3-f7f2a3acb892": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048916063s Apr 1 23:59:39.669: INFO: Pod "pod-projected-configmaps-0a6a08bd-bbac-4bf7-98e3-f7f2a3acb892": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052560694s STEP: Saw pod success Apr 1 23:59:39.669: INFO: Pod "pod-projected-configmaps-0a6a08bd-bbac-4bf7-98e3-f7f2a3acb892" satisfied condition "Succeeded or Failed" Apr 1 23:59:39.672: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-0a6a08bd-bbac-4bf7-98e3-f7f2a3acb892 container projected-configmap-volume-test: STEP: delete the pod Apr 1 23:59:39.720: INFO: Waiting for pod pod-projected-configmaps-0a6a08bd-bbac-4bf7-98e3-f7f2a3acb892 to disappear Apr 1 23:59:39.724: INFO: Pod pod-projected-configmaps-0a6a08bd-bbac-4bf7-98e3-f7f2a3acb892 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:59:39.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2894" for this suite. 
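The projected configMap test above mounts a configMap through a `projected` volume with a key-to-path mapping and an explicit per-item mode. A hedged sketch of such a manifest — all names and the mode value are placeholders for illustration, not the generated ones from the log:

```yaml
# Hypothetical sketch: a projected volume exposing one configMap key
# under a mapped path with an explicit file mode, as the e2e test
# exercises. The container reads the file and reports its permissions.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo    # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                  # illustrative image
    command: ["sh", "-c", "cat /etc/projected/path/to/data; stat -c '%a' /etc/projected/path/to/data"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: demo-configmap      # placeholder configMap name
          items:
          - key: data-1             # key-to-path mapping with item mode
            path: path/to/data
            mode: 0400              # explicit per-item mode
```

Mapping `items` lets a single key land at a nested path under the mount point, and the per-item `mode` overrides the volume-wide default, which is what the "mappings and Item mode set" spec name refers to.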
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:59:39.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-9eaed98d-6f8b-49a3-bcb2-6247d9a56897 STEP: Creating a pod to test consume secrets Apr 1 23:59:39.798: INFO: Waiting up to 5m0s for pod "pod-secrets-68ad14fb-fbd3-4e00-a682-407ceedd849b" in namespace "secrets-805" to be "Succeeded or Failed" Apr 1 23:59:39.843: INFO: Pod "pod-secrets-68ad14fb-fbd3-4e00-a682-407ceedd849b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.851347ms Apr 1 23:59:41.900: INFO: Pod "pod-secrets-68ad14fb-fbd3-4e00-a682-407ceedd849b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101774867s Apr 1 23:59:43.904: INFO: Pod "pod-secrets-68ad14fb-fbd3-4e00-a682-407ceedd849b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.106173413s STEP: Saw pod success Apr 1 23:59:43.904: INFO: Pod "pod-secrets-68ad14fb-fbd3-4e00-a682-407ceedd849b" satisfied condition "Succeeded or Failed" Apr 1 23:59:43.907: INFO: Trying to get logs from node latest-worker pod pod-secrets-68ad14fb-fbd3-4e00-a682-407ceedd849b container secret-volume-test: STEP: delete the pod Apr 1 23:59:43.926: INFO: Waiting for pod pod-secrets-68ad14fb-fbd3-4e00-a682-407ceedd849b to disappear Apr 1 23:59:43.929: INFO: Pod pod-secrets-68ad14fb-fbd3-4e00-a682-407ceedd849b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 1 23:59:43.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-805" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1352,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 1 23:59:43.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 1 23:59:44.626: INFO: Pod name wrapped-volume-race-ae6473d4-f742-44fd-86d8-3b3baf29d9eb: Found 0 pods out 
of 5 Apr 1 23:59:49.634: INFO: Pod name wrapped-volume-race-ae6473d4-f742-44fd-86d8-3b3baf29d9eb: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ae6473d4-f742-44fd-86d8-3b3baf29d9eb in namespace emptydir-wrapper-4811, will wait for the garbage collector to delete the pods Apr 2 00:00:03.747: INFO: Deleting ReplicationController wrapped-volume-race-ae6473d4-f742-44fd-86d8-3b3baf29d9eb took: 12.171725ms Apr 2 00:00:04.048: INFO: Terminating ReplicationController wrapped-volume-race-ae6473d4-f742-44fd-86d8-3b3baf29d9eb pods took: 300.427868ms STEP: Creating RC which spawns configmap-volume pods Apr 2 00:00:13.226: INFO: Pod name wrapped-volume-race-a206d175-436b-4a90-8ce7-1c33d27e983d: Found 0 pods out of 5 Apr 2 00:00:18.234: INFO: Pod name wrapped-volume-race-a206d175-436b-4a90-8ce7-1c33d27e983d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a206d175-436b-4a90-8ce7-1c33d27e983d in namespace emptydir-wrapper-4811, will wait for the garbage collector to delete the pods Apr 2 00:00:32.315: INFO: Deleting ReplicationController wrapped-volume-race-a206d175-436b-4a90-8ce7-1c33d27e983d took: 7.368966ms Apr 2 00:00:32.616: INFO: Terminating ReplicationController wrapped-volume-race-a206d175-436b-4a90-8ce7-1c33d27e983d pods took: 300.287158ms STEP: Creating RC which spawns configmap-volume pods Apr 2 00:00:42.853: INFO: Pod name wrapped-volume-race-bb7707ac-706b-486a-8d8e-8e288948c1c0: Found 0 pods out of 5 Apr 2 00:00:47.859: INFO: Pod name wrapped-volume-race-bb7707ac-706b-486a-8d8e-8e288948c1c0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-bb7707ac-706b-486a-8d8e-8e288948c1c0 in namespace emptydir-wrapper-4811, will wait for the garbage collector to delete the pods Apr 2 00:01:01.941: INFO: Deleting ReplicationController 
wrapped-volume-race-bb7707ac-706b-486a-8d8e-8e288948c1c0 took: 7.692711ms Apr 2 00:01:02.341: INFO: Terminating ReplicationController wrapped-volume-race-bb7707ac-706b-486a-8d8e-8e288948c1c0 pods took: 400.362756ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:01:14.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4811" for this suite. • [SLOW TEST:90.465 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":81,"skipped":1353,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:01:14.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:01:14.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5398" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":82,"skipped":1380,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:01:14.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:01:31.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-72" for this suite. • [SLOW TEST:17.100 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":83,"skipped":1381,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:01:31.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 2 00:01:31.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c179224-8fbd-4a7e-b638-a12bf8bde720" in namespace "projected-6120" to be "Succeeded or Failed" Apr 2 00:01:31.808: INFO: Pod "downwardapi-volume-1c179224-8fbd-4a7e-b638-a12bf8bde720": Phase="Pending", Reason="", readiness=false. Elapsed: 26.29459ms Apr 2 00:01:33.812: INFO: Pod "downwardapi-volume-1c179224-8fbd-4a7e-b638-a12bf8bde720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030408871s Apr 2 00:01:35.816: INFO: Pod "downwardapi-volume-1c179224-8fbd-4a7e-b638-a12bf8bde720": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034435482s STEP: Saw pod success Apr 2 00:01:35.816: INFO: Pod "downwardapi-volume-1c179224-8fbd-4a7e-b638-a12bf8bde720" satisfied condition "Succeeded or Failed" Apr 2 00:01:35.818: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1c179224-8fbd-4a7e-b638-a12bf8bde720 container client-container: STEP: delete the pod Apr 2 00:01:35.860: INFO: Waiting for pod downwardapi-volume-1c179224-8fbd-4a7e-b638-a12bf8bde720 to disappear Apr 2 00:01:35.869: INFO: Pod downwardapi-volume-1c179224-8fbd-4a7e-b638-a12bf8bde720 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:01:35.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6120" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1399,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:01:35.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward 
API volume plugin Apr 2 00:01:35.926: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76b7e858-81e7-42a0-8766-ac9e46d7da80" in namespace "downward-api-113" to be "Succeeded or Failed" Apr 2 00:01:35.930: INFO: Pod "downwardapi-volume-76b7e858-81e7-42a0-8766-ac9e46d7da80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019446ms Apr 2 00:01:37.934: INFO: Pod "downwardapi-volume-76b7e858-81e7-42a0-8766-ac9e46d7da80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008455552s Apr 2 00:01:39.938: INFO: Pod "downwardapi-volume-76b7e858-81e7-42a0-8766-ac9e46d7da80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012559053s STEP: Saw pod success Apr 2 00:01:39.938: INFO: Pod "downwardapi-volume-76b7e858-81e7-42a0-8766-ac9e46d7da80" satisfied condition "Succeeded or Failed" Apr 2 00:01:39.942: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-76b7e858-81e7-42a0-8766-ac9e46d7da80 container client-container: STEP: delete the pod Apr 2 00:01:39.975: INFO: Waiting for pod downwardapi-volume-76b7e858-81e7-42a0-8766-ac9e46d7da80 to disappear Apr 2 00:01:40.024: INFO: Pod downwardapi-volume-76b7e858-81e7-42a0-8766-ac9e46d7da80 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:01:40.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-113" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1410,"failed":0} SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:01:40.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-3160 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3160 to expose endpoints map[] Apr 2 00:01:40.178: INFO: Get endpoints failed (3.475885ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 2 00:01:41.182: INFO: successfully validated that service multi-endpoint-test in namespace services-3160 exposes endpoints map[] (1.007465279s elapsed) STEP: Creating pod pod1 in namespace services-3160 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3160 to expose endpoints map[pod1:[100]] Apr 2 00:01:44.235: INFO: successfully validated that service multi-endpoint-test in namespace services-3160 exposes endpoints map[pod1:[100]] (3.046980269s elapsed) STEP: Creating pod pod2 in namespace services-3160 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3160 to expose endpoints 
map[pod1:[100] pod2:[101]] Apr 2 00:01:47.318: INFO: successfully validated that service multi-endpoint-test in namespace services-3160 exposes endpoints map[pod1:[100] pod2:[101]] (3.079298635s elapsed) STEP: Deleting pod pod1 in namespace services-3160 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3160 to expose endpoints map[pod2:[101]] Apr 2 00:01:48.377: INFO: successfully validated that service multi-endpoint-test in namespace services-3160 exposes endpoints map[pod2:[101]] (1.054822545s elapsed) STEP: Deleting pod pod2 in namespace services-3160 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3160 to expose endpoints map[] Apr 2 00:01:48.391: INFO: successfully validated that service multi-endpoint-test in namespace services-3160 exposes endpoints map[] (5.580736ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:01:48.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3160" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:8.417 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":86,"skipped":1412,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:01:48.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:01:48.501: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:01:52.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5478" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:01:52.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-ed6d3126-b76c-4064-a161-6a649b5b223a in namespace container-probe-7362 Apr 2 00:01:56.736: INFO: Started pod busybox-ed6d3126-b76c-4064-a161-6a649b5b223a in namespace container-probe-7362 STEP: checking the pod's current state and verifying that restartCount is present Apr 2 00:01:56.739: INFO: Initial restart count of pod busybox-ed6d3126-b76c-4064-a161-6a649b5b223a is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:05:57.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7362" for this suite. 
• [SLOW TEST:244.712 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1488,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:05:57.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3889.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3889.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3889.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3889.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3889.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3889.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 00:06:03.522: INFO: DNS probes using dns-3889/dns-test-ee00bf9f-afd2-44cc-9f6a-40bb37e1495f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:06:03.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3889" for this suite. 
• [SLOW TEST:6.500 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":89,"skipped":1489,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:06:03.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 2 00:06:08.624: INFO: Successfully updated pod "annotationupdate18bed9f4-14c3-4d3c-9898-3cdfeb5b49f8" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:06:10.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1945" for this suite. 
• [SLOW TEST:6.770 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1491,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:06:10.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 2 00:06:15.230: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e0440f83-6544-4ce3-bcf0-6d6e23da7e4c" Apr 2 00:06:15.230: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e0440f83-6544-4ce3-bcf0-6d6e23da7e4c" in namespace "pods-3941" to be "terminated due to deadline exceeded" Apr 2 00:06:15.238: INFO: Pod "pod-update-activedeadlineseconds-e0440f83-6544-4ce3-bcf0-6d6e23da7e4c": Phase="Running", 
Reason="", readiness=true. Elapsed: 7.571504ms Apr 2 00:06:17.242: INFO: Pod "pod-update-activedeadlineseconds-e0440f83-6544-4ce3-bcf0-6d6e23da7e4c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.011873344s Apr 2 00:06:17.242: INFO: Pod "pod-update-activedeadlineseconds-e0440f83-6544-4ce3-bcf0-6d6e23da7e4c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:06:17.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3941" for this suite. • [SLOW TEST:6.604 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1503,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:06:17.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
Performing setup for networking test in namespace pod-network-test-2396 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 2 00:06:17.301: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 2 00:06:17.381: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 2 00:06:19.465: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 2 00:06:21.383: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:06:23.385: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:06:25.386: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:06:27.385: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:06:29.385: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:06:31.385: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 2 00:06:31.391: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 2 00:06:33.394: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 2 00:06:37.419: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.72:8080/dial?request=hostname&protocol=udp&host=10.244.2.211&port=8081&tries=1'] Namespace:pod-network-test-2396 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:06:37.419: INFO: >>> kubeConfig: /root/.kube/config I0402 00:06:37.457325 7 log.go:172] (0xc00288a790) (0xc0002c59a0) Create stream I0402 00:06:37.457356 7 log.go:172] (0xc00288a790) (0xc0002c59a0) Stream added, broadcasting: 1 I0402 00:06:37.459974 7 log.go:172] (0xc00288a790) Reply frame received for 1 I0402 00:06:37.460039 7 log.go:172] (0xc00288a790) (0xc001408140) Create stream I0402 00:06:37.460124 7 log.go:172] (0xc00288a790) (0xc001408140) Stream added, broadcasting: 3 I0402 00:06:37.461447 7 
log.go:172] (0xc00288a790) Reply frame received for 3 I0402 00:06:37.461484 7 log.go:172] (0xc00288a790) (0xc0014081e0) Create stream I0402 00:06:37.461497 7 log.go:172] (0xc00288a790) (0xc0014081e0) Stream added, broadcasting: 5 I0402 00:06:37.462431 7 log.go:172] (0xc00288a790) Reply frame received for 5 I0402 00:06:37.538200 7 log.go:172] (0xc00288a790) Data frame received for 3 I0402 00:06:37.538235 7 log.go:172] (0xc001408140) (3) Data frame handling I0402 00:06:37.538261 7 log.go:172] (0xc001408140) (3) Data frame sent I0402 00:06:37.538953 7 log.go:172] (0xc00288a790) Data frame received for 3 I0402 00:06:37.538975 7 log.go:172] (0xc001408140) (3) Data frame handling I0402 00:06:37.539012 7 log.go:172] (0xc00288a790) Data frame received for 5 I0402 00:06:37.539036 7 log.go:172] (0xc0014081e0) (5) Data frame handling I0402 00:06:37.540912 7 log.go:172] (0xc00288a790) Data frame received for 1 I0402 00:06:37.540946 7 log.go:172] (0xc0002c59a0) (1) Data frame handling I0402 00:06:37.540973 7 log.go:172] (0xc0002c59a0) (1) Data frame sent I0402 00:06:37.540996 7 log.go:172] (0xc00288a790) (0xc0002c59a0) Stream removed, broadcasting: 1 I0402 00:06:37.541082 7 log.go:172] (0xc00288a790) (0xc0002c59a0) Stream removed, broadcasting: 1 I0402 00:06:37.541098 7 log.go:172] (0xc00288a790) (0xc001408140) Stream removed, broadcasting: 3 I0402 00:06:37.541107 7 log.go:172] (0xc00288a790) (0xc0014081e0) Stream removed, broadcasting: 5 Apr 2 00:06:37.541: INFO: Waiting for responses: map[] I0402 00:06:37.541548 7 log.go:172] (0xc00288a790) Go away received Apr 2 00:06:37.544: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.72:8080/dial?request=hostname&protocol=udp&host=10.244.1.71&port=8081&tries=1'] Namespace:pod-network-test-2396 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:06:37.544: INFO: >>> kubeConfig: /root/.kube/config I0402 00:06:37.578359 7 log.go:172] 
(0xc002e482c0) (0xc001408780) Create stream I0402 00:06:37.578390 7 log.go:172] (0xc002e482c0) (0xc001408780) Stream added, broadcasting: 1 I0402 00:06:37.580514 7 log.go:172] (0xc002e482c0) Reply frame received for 1 I0402 00:06:37.580550 7 log.go:172] (0xc002e482c0) (0xc0015a25a0) Create stream I0402 00:06:37.580560 7 log.go:172] (0xc002e482c0) (0xc0015a25a0) Stream added, broadcasting: 3 I0402 00:06:37.581909 7 log.go:172] (0xc002e482c0) Reply frame received for 3 I0402 00:06:37.581951 7 log.go:172] (0xc002e482c0) (0xc000b2db80) Create stream I0402 00:06:37.581966 7 log.go:172] (0xc002e482c0) (0xc000b2db80) Stream added, broadcasting: 5 I0402 00:06:37.582999 7 log.go:172] (0xc002e482c0) Reply frame received for 5 I0402 00:06:37.657366 7 log.go:172] (0xc002e482c0) Data frame received for 3 I0402 00:06:37.657387 7 log.go:172] (0xc0015a25a0) (3) Data frame handling I0402 00:06:37.657401 7 log.go:172] (0xc0015a25a0) (3) Data frame sent I0402 00:06:37.657996 7 log.go:172] (0xc002e482c0) Data frame received for 5 I0402 00:06:37.658020 7 log.go:172] (0xc000b2db80) (5) Data frame handling I0402 00:06:37.658061 7 log.go:172] (0xc002e482c0) Data frame received for 3 I0402 00:06:37.658075 7 log.go:172] (0xc0015a25a0) (3) Data frame handling I0402 00:06:37.659727 7 log.go:172] (0xc002e482c0) Data frame received for 1 I0402 00:06:37.659776 7 log.go:172] (0xc001408780) (1) Data frame handling I0402 00:06:37.659793 7 log.go:172] (0xc001408780) (1) Data frame sent I0402 00:06:37.659825 7 log.go:172] (0xc002e482c0) (0xc001408780) Stream removed, broadcasting: 1 I0402 00:06:37.659849 7 log.go:172] (0xc002e482c0) Go away received I0402 00:06:37.659978 7 log.go:172] (0xc002e482c0) (0xc001408780) Stream removed, broadcasting: 1 I0402 00:06:37.660021 7 log.go:172] (0xc002e482c0) (0xc0015a25a0) Stream removed, broadcasting: 3 I0402 00:06:37.660040 7 log.go:172] (0xc002e482c0) (0xc000b2db80) Stream removed, broadcasting: 5 Apr 2 00:06:37.660: INFO: Waiting for responses: map[] 
[AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:06:37.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2396" for this suite. • [SLOW TEST:20.417 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:06:37.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-158b792c-7c88-4870-a77b-0936fc75be9e STEP: Creating configMap with name cm-test-opt-upd-0d7aac27-9dc3-4b0c-931c-c7d2fec75ab4 STEP: Creating the pod STEP: Deleting 
configmap cm-test-opt-del-158b792c-7c88-4870-a77b-0936fc75be9e STEP: Updating configmap cm-test-opt-upd-0d7aac27-9dc3-4b0c-931c-c7d2fec75ab4 STEP: Creating configMap with name cm-test-opt-create-815fa55f-d06b-4e0f-a199-c3772dcdbb4f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:07:56.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6827" for this suite. • [SLOW TEST:78.516 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1556,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:07:56.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9121 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9121 I0402 00:07:56.418915 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9121, replica count: 2 I0402 00:07:59.469603 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 00:08:02.469858 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 2 00:08:02.469: INFO: Creating new exec pod Apr 2 00:08:07.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9121 execpodtc8bd -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 2 00:08:07.817: INFO: stderr: "I0402 00:08:07.747463 982 log.go:172] (0xc000af8dc0) (0xc000af0320) Create stream\nI0402 00:08:07.747518 982 log.go:172] (0xc000af8dc0) (0xc000af0320) Stream added, broadcasting: 1\nI0402 00:08:07.749814 982 log.go:172] (0xc000af8dc0) Reply frame received for 1\nI0402 00:08:07.749859 982 log.go:172] (0xc000af8dc0) (0xc0009a6000) Create stream\nI0402 00:08:07.749869 982 log.go:172] (0xc000af8dc0) (0xc0009a6000) Stream added, broadcasting: 3\nI0402 00:08:07.750973 982 log.go:172] (0xc000af8dc0) Reply frame received for 3\nI0402 00:08:07.751070 982 log.go:172] (0xc000af8dc0) (0xc000942000) Create stream\nI0402 00:08:07.751149 982 log.go:172] (0xc000af8dc0) (0xc000942000) Stream added, broadcasting: 5\nI0402 00:08:07.752334 982 log.go:172] (0xc000af8dc0) Reply frame received for 5\nI0402 00:08:07.810143 982 log.go:172] (0xc000af8dc0) 
Data frame received for 5\nI0402 00:08:07.810165 982 log.go:172] (0xc000942000) (5) Data frame handling\nI0402 00:08:07.810174 982 log.go:172] (0xc000942000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0402 00:08:07.810884 982 log.go:172] (0xc000af8dc0) Data frame received for 5\nI0402 00:08:07.810935 982 log.go:172] (0xc000942000) (5) Data frame handling\nI0402 00:08:07.810980 982 log.go:172] (0xc000942000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0402 00:08:07.811157 982 log.go:172] (0xc000af8dc0) Data frame received for 3\nI0402 00:08:07.811175 982 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0402 00:08:07.811207 982 log.go:172] (0xc000af8dc0) Data frame received for 5\nI0402 00:08:07.811244 982 log.go:172] (0xc000942000) (5) Data frame handling\nI0402 00:08:07.813239 982 log.go:172] (0xc000af8dc0) Data frame received for 1\nI0402 00:08:07.813260 982 log.go:172] (0xc000af0320) (1) Data frame handling\nI0402 00:08:07.813270 982 log.go:172] (0xc000af0320) (1) Data frame sent\nI0402 00:08:07.813285 982 log.go:172] (0xc000af8dc0) (0xc000af0320) Stream removed, broadcasting: 1\nI0402 00:08:07.813327 982 log.go:172] (0xc000af8dc0) Go away received\nI0402 00:08:07.813568 982 log.go:172] (0xc000af8dc0) (0xc000af0320) Stream removed, broadcasting: 1\nI0402 00:08:07.813582 982 log.go:172] (0xc000af8dc0) (0xc0009a6000) Stream removed, broadcasting: 3\nI0402 00:08:07.813589 982 log.go:172] (0xc000af8dc0) (0xc000942000) Stream removed, broadcasting: 5\n" Apr 2 00:08:07.817: INFO: stdout: "" Apr 2 00:08:07.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9121 execpodtc8bd -- /bin/sh -x -c nc -zv -t -w 2 10.96.101.45 80' Apr 2 00:08:08.022: INFO: stderr: "I0402 00:08:07.954047 1005 log.go:172] (0xc000b82d10) (0xc000a66780) Create stream\nI0402 00:08:07.954104 1005 log.go:172] (0xc000b82d10) (0xc000a66780) Stream 
added, broadcasting: 1\nI0402 00:08:07.958176 1005 log.go:172] (0xc000b82d10) Reply frame received for 1\nI0402 00:08:07.958218 1005 log.go:172] (0xc000b82d10) (0xc000a06000) Create stream\nI0402 00:08:07.958229 1005 log.go:172] (0xc000b82d10) (0xc000a06000) Stream added, broadcasting: 3\nI0402 00:08:07.959118 1005 log.go:172] (0xc000b82d10) Reply frame received for 3\nI0402 00:08:07.959165 1005 log.go:172] (0xc000b82d10) (0xc00064b7c0) Create stream\nI0402 00:08:07.959181 1005 log.go:172] (0xc000b82d10) (0xc00064b7c0) Stream added, broadcasting: 5\nI0402 00:08:07.959994 1005 log.go:172] (0xc000b82d10) Reply frame received for 5\nI0402 00:08:08.016114 1005 log.go:172] (0xc000b82d10) Data frame received for 5\nI0402 00:08:08.016152 1005 log.go:172] (0xc00064b7c0) (5) Data frame handling\nI0402 00:08:08.016166 1005 log.go:172] (0xc00064b7c0) (5) Data frame sent\nI0402 00:08:08.016182 1005 log.go:172] (0xc000b82d10) Data frame received for 5\nI0402 00:08:08.016197 1005 log.go:172] (0xc00064b7c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.101.45 80\nConnection to 10.96.101.45 80 port [tcp/http] succeeded!\nI0402 00:08:08.016230 1005 log.go:172] (0xc000b82d10) Data frame received for 3\nI0402 00:08:08.016241 1005 log.go:172] (0xc000a06000) (3) Data frame handling\nI0402 00:08:08.017861 1005 log.go:172] (0xc000b82d10) Data frame received for 1\nI0402 00:08:08.017907 1005 log.go:172] (0xc000a66780) (1) Data frame handling\nI0402 00:08:08.017941 1005 log.go:172] (0xc000a66780) (1) Data frame sent\nI0402 00:08:08.017978 1005 log.go:172] (0xc000b82d10) (0xc000a66780) Stream removed, broadcasting: 1\nI0402 00:08:08.018102 1005 log.go:172] (0xc000b82d10) Go away received\nI0402 00:08:08.018442 1005 log.go:172] (0xc000b82d10) (0xc000a66780) Stream removed, broadcasting: 1\nI0402 00:08:08.018465 1005 log.go:172] (0xc000b82d10) (0xc000a06000) Stream removed, broadcasting: 3\nI0402 00:08:08.018478 1005 log.go:172] (0xc000b82d10) (0xc00064b7c0) Stream removed, broadcasting: 
5\n" Apr 2 00:08:08.022: INFO: stdout: "" Apr 2 00:08:08.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9121 execpodtc8bd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30519' Apr 2 00:08:08.219: INFO: stderr: "I0402 00:08:08.152602 1028 log.go:172] (0xc000b19080) (0xc00097eb40) Create stream\nI0402 00:08:08.152686 1028 log.go:172] (0xc000b19080) (0xc00097eb40) Stream added, broadcasting: 1\nI0402 00:08:08.158401 1028 log.go:172] (0xc000b19080) Reply frame received for 1\nI0402 00:08:08.158443 1028 log.go:172] (0xc000b19080) (0xc0005fb540) Create stream\nI0402 00:08:08.158458 1028 log.go:172] (0xc000b19080) (0xc0005fb540) Stream added, broadcasting: 3\nI0402 00:08:08.159417 1028 log.go:172] (0xc000b19080) Reply frame received for 3\nI0402 00:08:08.159474 1028 log.go:172] (0xc000b19080) (0xc00051e960) Create stream\nI0402 00:08:08.159494 1028 log.go:172] (0xc000b19080) (0xc00051e960) Stream added, broadcasting: 5\nI0402 00:08:08.160479 1028 log.go:172] (0xc000b19080) Reply frame received for 5\nI0402 00:08:08.214234 1028 log.go:172] (0xc000b19080) Data frame received for 5\nI0402 00:08:08.214279 1028 log.go:172] (0xc00051e960) (5) Data frame handling\nI0402 00:08:08.214297 1028 log.go:172] (0xc00051e960) (5) Data frame sent\nI0402 00:08:08.214311 1028 log.go:172] (0xc000b19080) Data frame received for 5\nI0402 00:08:08.214321 1028 log.go:172] (0xc00051e960) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30519\nConnection to 172.17.0.13 30519 port [tcp/30519] succeeded!\nI0402 00:08:08.214362 1028 log.go:172] (0xc000b19080) Data frame received for 3\nI0402 00:08:08.214387 1028 log.go:172] (0xc0005fb540) (3) Data frame handling\nI0402 00:08:08.215633 1028 log.go:172] (0xc000b19080) Data frame received for 1\nI0402 00:08:08.215653 1028 log.go:172] (0xc00097eb40) (1) Data frame handling\nI0402 00:08:08.215670 1028 log.go:172] (0xc00097eb40) (1) Data frame sent\nI0402 
00:08:08.215685 1028 log.go:172] (0xc000b19080) (0xc00097eb40) Stream removed, broadcasting: 1\nI0402 00:08:08.215874 1028 log.go:172] (0xc000b19080) Go away received\nI0402 00:08:08.216008 1028 log.go:172] (0xc000b19080) (0xc00097eb40) Stream removed, broadcasting: 1\nI0402 00:08:08.216025 1028 log.go:172] (0xc000b19080) (0xc0005fb540) Stream removed, broadcasting: 3\nI0402 00:08:08.216032 1028 log.go:172] (0xc000b19080) (0xc00051e960) Stream removed, broadcasting: 5\n" Apr 2 00:08:08.219: INFO: stdout: "" Apr 2 00:08:08.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9121 execpodtc8bd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30519' Apr 2 00:08:08.428: INFO: stderr: "I0402 00:08:08.344443 1049 log.go:172] (0xc00003aa50) (0xc0007d8280) Create stream\nI0402 00:08:08.344497 1049 log.go:172] (0xc00003aa50) (0xc0007d8280) Stream added, broadcasting: 1\nI0402 00:08:08.347576 1049 log.go:172] (0xc00003aa50) Reply frame received for 1\nI0402 00:08:08.347629 1049 log.go:172] (0xc00003aa50) (0xc000a94000) Create stream\nI0402 00:08:08.347648 1049 log.go:172] (0xc00003aa50) (0xc000a94000) Stream added, broadcasting: 3\nI0402 00:08:08.348690 1049 log.go:172] (0xc00003aa50) Reply frame received for 3\nI0402 00:08:08.348735 1049 log.go:172] (0xc00003aa50) (0xc0007d8320) Create stream\nI0402 00:08:08.348751 1049 log.go:172] (0xc00003aa50) (0xc0007d8320) Stream added, broadcasting: 5\nI0402 00:08:08.349969 1049 log.go:172] (0xc00003aa50) Reply frame received for 5\nI0402 00:08:08.420325 1049 log.go:172] (0xc00003aa50) Data frame received for 5\nI0402 00:08:08.420372 1049 log.go:172] (0xc0007d8320) (5) Data frame handling\nI0402 00:08:08.420416 1049 log.go:172] (0xc0007d8320) (5) Data frame sent\nI0402 00:08:08.420429 1049 log.go:172] (0xc00003aa50) Data frame received for 5\nI0402 00:08:08.420440 1049 log.go:172] (0xc0007d8320) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 
30519\nConnection to 172.17.0.12 30519 port [tcp/30519] succeeded!\nI0402 00:08:08.420612 1049 log.go:172] (0xc00003aa50) Data frame received for 3\nI0402 00:08:08.420640 1049 log.go:172] (0xc000a94000) (3) Data frame handling\nI0402 00:08:08.422837 1049 log.go:172] (0xc00003aa50) Data frame received for 1\nI0402 00:08:08.422863 1049 log.go:172] (0xc0007d8280) (1) Data frame handling\nI0402 00:08:08.422881 1049 log.go:172] (0xc0007d8280) (1) Data frame sent\nI0402 00:08:08.422899 1049 log.go:172] (0xc00003aa50) (0xc0007d8280) Stream removed, broadcasting: 1\nI0402 00:08:08.423065 1049 log.go:172] (0xc00003aa50) Go away received\nI0402 00:08:08.423296 1049 log.go:172] (0xc00003aa50) (0xc0007d8280) Stream removed, broadcasting: 1\nI0402 00:08:08.423316 1049 log.go:172] (0xc00003aa50) (0xc000a94000) Stream removed, broadcasting: 3\nI0402 00:08:08.423333 1049 log.go:172] (0xc00003aa50) (0xc0007d8320) Stream removed, broadcasting: 5\n" Apr 2 00:08:08.428: INFO: stdout: "" Apr 2 00:08:08.428: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:08:08.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9121" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.309 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":94,"skipped":1620,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:08:08.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 2 00:08:16.672: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 2 00:08:16.705: INFO: Pod pod-with-prestop-exec-hook still exists Apr 2 00:08:18.706: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 2 00:08:18.710: INFO: Pod pod-with-prestop-exec-hook still exists Apr 2 00:08:20.706: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 2 00:08:20.709: INFO: Pod pod-with-prestop-exec-hook still exists Apr 2 00:08:22.706: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 2 00:08:22.708: INFO: Pod pod-with-prestop-exec-hook still exists Apr 2 00:08:24.706: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 2 00:08:24.709: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:08:24.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6917" for this suite. 
• [SLOW TEST:16.244 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1621,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:08:24.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 2 00:08:24.807: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:08:40.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4678" for this suite. • [SLOW TEST:15.912 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":96,"skipped":1629,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:08:40.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 2 00:08:40.761: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-373 /api/v1/namespaces/watch-373/configmaps/e2e-watch-test-label-changed 5a87a0b4-a582-492e-af7b-711e609c48d2 4666534 0 2020-04-02 00:08:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:08:40.761: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-373 /api/v1/namespaces/watch-373/configmaps/e2e-watch-test-label-changed 5a87a0b4-a582-492e-af7b-711e609c48d2 4666535 0 2020-04-02 00:08:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:08:40.761: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-373 /api/v1/namespaces/watch-373/configmaps/e2e-watch-test-label-changed 5a87a0b4-a582-492e-af7b-711e609c48d2 4666536 0 2020-04-02 00:08:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 2 00:08:50.818: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-373 /api/v1/namespaces/watch-373/configmaps/e2e-watch-test-label-changed 5a87a0b4-a582-492e-af7b-711e609c48d2 4666574 0 2020-04-02 00:08:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:08:50.818: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-373 /api/v1/namespaces/watch-373/configmaps/e2e-watch-test-label-changed 5a87a0b4-a582-492e-af7b-711e609c48d2 4666575 0 2020-04-02 00:08:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:08:50.818: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-373 /api/v1/namespaces/watch-373/configmaps/e2e-watch-test-label-changed 5a87a0b4-a582-492e-af7b-711e609c48d2 4666576 0 2020-04-02 00:08:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:08:50.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-373" for this suite. 
• [SLOW TEST:10.186 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":97,"skipped":1642,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:08:50.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 2 00:08:50.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67095372-5bd8-409a-a7b1-d5dad5329b24" in namespace "projected-2915" to be "Succeeded or Failed" Apr 2 00:08:50.917: INFO: Pod "downwardapi-volume-67095372-5bd8-409a-a7b1-d5dad5329b24": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.825444ms Apr 2 00:08:52.922: INFO: Pod "downwardapi-volume-67095372-5bd8-409a-a7b1-d5dad5329b24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008287022s Apr 2 00:08:54.926: INFO: Pod "downwardapi-volume-67095372-5bd8-409a-a7b1-d5dad5329b24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012709959s STEP: Saw pod success Apr 2 00:08:54.926: INFO: Pod "downwardapi-volume-67095372-5bd8-409a-a7b1-d5dad5329b24" satisfied condition "Succeeded or Failed" Apr 2 00:08:54.929: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-67095372-5bd8-409a-a7b1-d5dad5329b24 container client-container: STEP: delete the pod Apr 2 00:08:54.961: INFO: Waiting for pod downwardapi-volume-67095372-5bd8-409a-a7b1-d5dad5329b24 to disappear Apr 2 00:08:54.981: INFO: Pod downwardapi-volume-67095372-5bd8-409a-a7b1-d5dad5329b24 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:08:54.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2915" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1661,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:08:54.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Apr 2 00:08:55.067: INFO: Created pod &Pod{ObjectMeta:{dns-2978 dns-2978 /api/v1/namespaces/dns-2978/pods/dns-2978 71f8863e-ea47-4a6b-ba4e-10eb8ac71c88 4666610 0 2020-04-02 00:08:55 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wk768,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wk768,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wk768,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets
:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:08:55.071: INFO: The status of Pod dns-2978 is Pending, waiting for it to be Running (with Ready = true) Apr 2 00:08:57.120: INFO: The status of Pod dns-2978 is Pending, waiting for it to be Running (with Ready = true) Apr 2 00:08:59.078: INFO: The status of Pod dns-2978 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 2 00:08:59.078: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2978 PodName:dns-2978 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 2 00:08:59.078: INFO: >>> kubeConfig: /root/.kube/config
I0402 00:08:59.112187 7 log.go:172] (0xc004ba24d0) (0xc000c68320) Create stream
I0402 00:08:59.112217 7 log.go:172] (0xc004ba24d0) (0xc000c68320) Stream added, broadcasting: 1
I0402 00:08:59.114253 7 log.go:172] (0xc004ba24d0) Reply frame received for 1
I0402 00:08:59.114305 7 log.go:172] (0xc004ba24d0) (0xc000395360) Create stream
I0402 00:08:59.114319 7 log.go:172] (0xc004ba24d0) (0xc000395360) Stream added, broadcasting: 3
I0402 00:08:59.115311 7 log.go:172] (0xc004ba24d0) Reply frame received for 3
I0402 00:08:59.115349 7 log.go:172] (0xc004ba24d0) (0xc000b2dea0) Create stream
I0402 00:08:59.115360 7 log.go:172] (0xc004ba24d0) (0xc000b2dea0) Stream added, broadcasting: 5
I0402 00:08:59.116254 7 log.go:172] (0xc004ba24d0) Reply frame received for 5
I0402 00:08:59.200698 7 log.go:172] (0xc004ba24d0) Data frame received for 3
I0402 00:08:59.200742 7 log.go:172] (0xc000395360) (3) Data frame handling
I0402 00:08:59.200771 7 log.go:172] (0xc000395360) (3) Data frame sent
I0402 00:08:59.201053 7 log.go:172] (0xc004ba24d0) Data frame received for 3
I0402 00:08:59.201070 7 log.go:172] (0xc000395360) (3) Data frame handling
I0402 00:08:59.201384 7 log.go:172] (0xc004ba24d0) Data frame received for 5
I0402 00:08:59.201415 7 log.go:172] (0xc000b2dea0) (5) Data frame handling
I0402 00:08:59.203294 7 log.go:172] (0xc004ba24d0) Data frame received for 1
I0402 00:08:59.203307 7 log.go:172] (0xc000c68320) (1) Data frame handling
I0402 00:08:59.203318 7 log.go:172] (0xc000c68320) (1) Data frame sent
I0402 00:08:59.203328 7 log.go:172] (0xc004ba24d0) (0xc000c68320) Stream removed, broadcasting: 1
I0402 00:08:59.203410 7 log.go:172] (0xc004ba24d0) (0xc000c68320) Stream removed, broadcasting: 1
I0402 00:08:59.203419 7 log.go:172] (0xc004ba24d0) (0xc000395360) Stream removed, broadcasting: 3
I0402 00:08:59.203508 7 log.go:172] (0xc004ba24d0) (0xc000b2dea0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
I0402 00:08:59.203659 7 log.go:172] (0xc004ba24d0) Go away received
Apr 2 00:08:59.203: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2978 PodName:dns-2978 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 2 00:08:59.203: INFO: >>> kubeConfig: /root/.kube/config
I0402 00:08:59.238115 7 log.go:172] (0xc004ba2b00) (0xc000c68780) Create stream
I0402 00:08:59.238156 7 log.go:172] (0xc004ba2b00) (0xc000c68780) Stream added, broadcasting: 1
I0402 00:08:59.240150 7 log.go:172] (0xc004ba2b00) Reply frame received for 1
I0402 00:08:59.240210 7 log.go:172] (0xc004ba2b00) (0xc0015de0a0) Create stream
I0402 00:08:59.240224 7 log.go:172] (0xc004ba2b00) (0xc0015de0a0) Stream added, broadcasting: 3
I0402 00:08:59.241255 7 log.go:172] (0xc004ba2b00) Reply frame received for 3
I0402 00:08:59.241295 7 log.go:172] (0xc004ba2b00) (0xc0011720a0) Create stream
I0402 00:08:59.241307 7 log.go:172] (0xc004ba2b00) (0xc0011720a0) Stream added, broadcasting: 5
I0402 00:08:59.242270 7 log.go:172] (0xc004ba2b00) Reply frame received for 5
I0402 00:08:59.304128 7 log.go:172] (0xc004ba2b00) Data frame received for 3
I0402 00:08:59.304161 7 log.go:172] (0xc0015de0a0) (3) Data frame handling
I0402 00:08:59.304177 7 log.go:172] (0xc0015de0a0) (3) Data frame sent
I0402 00:08:59.304975 7 log.go:172] (0xc004ba2b00) Data frame received for 5
I0402 00:08:59.304996 7 log.go:172] (0xc0011720a0) (5) Data frame handling
I0402 00:08:59.305017 7 log.go:172] (0xc004ba2b00) Data frame received for 3
I0402 00:08:59.305031 7 log.go:172] (0xc0015de0a0) (3) Data frame handling
I0402 00:08:59.306966 7 log.go:172] (0xc004ba2b00) Data frame received for 1
I0402 00:08:59.306987 7 log.go:172] (0xc000c68780) (1) Data frame handling
I0402 00:08:59.306999 7 log.go:172] (0xc000c68780) (1) Data frame sent
I0402 00:08:59.307011 7 log.go:172] (0xc004ba2b00) (0xc000c68780) Stream removed, broadcasting: 1
I0402 00:08:59.307024 7 log.go:172] (0xc004ba2b00) Go away received
I0402 00:08:59.307231 7 log.go:172] (0xc004ba2b00) (0xc000c68780) Stream removed, broadcasting: 1
I0402 00:08:59.307263 7 log.go:172] (0xc004ba2b00) (0xc0015de0a0) Stream removed, broadcasting: 3
I0402 00:08:59.307287 7 log.go:172] (0xc004ba2b00) (0xc0011720a0) Stream removed, broadcasting: 5
Apr 2 00:08:59.307: INFO: Deleting pod dns-2978...
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:08:59.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2978" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":99,"skipped":1672,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:08:59.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:08:59.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6699" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":100,"skipped":1678,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:08:59.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 2 00:09:00.229: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37c5ba44-1c18-4044-8c26-686d1775acb6" in namespace "projected-6133" to be "Succeeded or Failed"
Apr 2 00:09:00.282: INFO: Pod "downwardapi-volume-37c5ba44-1c18-4044-8c26-686d1775acb6": Phase="Pending", Reason="", readiness=false. Elapsed: 52.642894ms
Apr 2 00:09:02.286: INFO: Pod "downwardapi-volume-37c5ba44-1c18-4044-8c26-686d1775acb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056582534s
Apr 2 00:09:04.289: INFO: Pod "downwardapi-volume-37c5ba44-1c18-4044-8c26-686d1775acb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060242943s
STEP: Saw pod success
Apr 2 00:09:04.290: INFO: Pod "downwardapi-volume-37c5ba44-1c18-4044-8c26-686d1775acb6" satisfied condition "Succeeded or Failed"
Apr 2 00:09:04.292: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-37c5ba44-1c18-4044-8c26-686d1775acb6 container client-container:
STEP: delete the pod
Apr 2 00:09:04.326: INFO: Waiting for pod downwardapi-volume-37c5ba44-1c18-4044-8c26-686d1775acb6 to disappear
Apr 2 00:09:04.336: INFO: Pod downwardapi-volume-37c5ba44-1c18-4044-8c26-686d1775acb6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:09:04.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6133" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1724,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:09:04.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-b2726d8d-f7af-4e3d-9479-cea509eed3af
STEP: Creating a pod to test consume configMaps
Apr 2 00:09:04.428: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ad122ae-cc0d-4aa2-9953-3d56dd58bee7" in namespace "configmap-874" to be "Succeeded or Failed"
Apr 2 00:09:04.438: INFO: Pod "pod-configmaps-4ad122ae-cc0d-4aa2-9953-3d56dd58bee7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117956ms
Apr 2 00:09:06.442: INFO: Pod "pod-configmaps-4ad122ae-cc0d-4aa2-9953-3d56dd58bee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014148177s
Apr 2 00:09:08.447: INFO: Pod "pod-configmaps-4ad122ae-cc0d-4aa2-9953-3d56dd58bee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018747584s
STEP: Saw pod success
Apr 2 00:09:08.447: INFO: Pod "pod-configmaps-4ad122ae-cc0d-4aa2-9953-3d56dd58bee7" satisfied condition "Succeeded or Failed"
Apr 2 00:09:08.450: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-4ad122ae-cc0d-4aa2-9953-3d56dd58bee7 container configmap-volume-test:
STEP: delete the pod
Apr 2 00:09:08.485: INFO: Waiting for pod pod-configmaps-4ad122ae-cc0d-4aa2-9953-3d56dd58bee7 to disappear
Apr 2 00:09:08.492: INFO: Pod pod-configmaps-4ad122ae-cc0d-4aa2-9953-3d56dd58bee7 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:09:08.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-874" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1739,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:09:08.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:09:08.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 2 00:09:10.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-558 create -f -'
Apr 2 00:09:13.492: INFO: stderr: ""
Apr 2 00:09:13.492: INFO: stdout: "e2e-test-crd-publish-openapi-1037-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 2 00:09:13.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-558 delete e2e-test-crd-publish-openapi-1037-crds test-cr'
Apr 2 00:09:13.597: INFO: stderr: ""
Apr 2 00:09:13.597: INFO: stdout: "e2e-test-crd-publish-openapi-1037-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Apr 2 00:09:13.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-558 apply -f -'
Apr 2 00:09:13.853: INFO: stderr: ""
Apr 2 00:09:13.853: INFO: stdout: "e2e-test-crd-publish-openapi-1037-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 2 00:09:13.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-558 delete e2e-test-crd-publish-openapi-1037-crds test-cr'
Apr 2 00:09:13.974: INFO: stderr: ""
Apr 2 00:09:13.974: INFO: stdout: "e2e-test-crd-publish-openapi-1037-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 2 00:09:13.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1037-crds'
Apr 2 00:09:14.203: INFO: stderr: ""
Apr 2 00:09:14.203: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1037-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:09:17.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-558" for this suite.
• [SLOW TEST:8.632 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":103,"skipped":1744,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:09:17.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:09:17.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3840'
Apr 2 00:09:17.466: INFO: stderr: ""
Apr 2 00:09:17.466: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Apr 2 00:09:17.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3840'
Apr 2 00:09:17.733: INFO: stderr: ""
Apr 2 00:09:17.734: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 2 00:09:18.738: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:09:18.738: INFO: Found 0 / 1
Apr 2 00:09:19.738: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:09:19.738: INFO: Found 0 / 1
Apr 2 00:09:20.737: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:09:20.737: INFO: Found 1 / 1
Apr 2 00:09:20.737: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 2 00:09:20.740: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:09:20.740: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 2 00:09:20.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-2krn7 --namespace=kubectl-3840'
Apr 2 00:09:20.847: INFO: stderr: ""
Apr 2 00:09:20.847: INFO: stdout: "Name: agnhost-master-2krn7\nNamespace: kubectl-3840\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Thu, 02 Apr 2020 00:09:17 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.217\nIPs:\n IP: 10.244.2.217\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://3f5fca63428a3fd973594ce35578d6cd969aa909d8a02bf4ea693cb492f84981\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 02 Apr 2020 00:09:19 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-cs286 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-cs286:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-cs286\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-3840/agnhost-master-2krn7 to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n"
Apr 2 00:09:20.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3840'
Apr 2 00:09:20.953: INFO: stderr: ""
Apr 2 00:09:20.953: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3840\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-2krn7\n"
Apr 2 00:09:20.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3840'
Apr 2 00:09:21.056: INFO: stderr: ""
Apr 2 00:09:21.056: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3840\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.60.107\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.217:6379\nSession Affinity: None\nEvents: \n"
Apr 2 00:09:21.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane'
Apr 2 00:09:21.178: INFO: stderr: ""
Apr 2 00:09:21.178: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Thu, 02 Apr 2020 00:09:20 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 02 Apr 2020 00:08:56 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 02 Apr 2020 00:08:56 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 02 Apr 2020 00:08:56 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 02 Apr 2020 00:08:56 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 17d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 17d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 17d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 17d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n"
Apr 2 00:09:21.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-3840'
Apr 2 00:09:21.286: INFO: stderr: ""
Apr 2 00:09:21.286: INFO: stdout: "Name: kubectl-3840\nLabels: e2e-framework=kubectl\n e2e-run=d5ae8c2d-0969-4d03-a25c-96aa92f0f517\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:09:21.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3840" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":104,"skipped":1779,"failed":0}
S
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:09:21.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-6f1dfea9-c5db-4cee-a90e-85a11574ae6b
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-6f1dfea9-c5db-4cee-a90e-85a11574ae6b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:10:43.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1425" for this suite.
• [SLOW TEST:82.566 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1780,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:10:43.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 2 00:10:43.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7c3b1aa-5b57-4248-abf9-a0723b877b8c" in namespace "downward-api-5868" to be "Succeeded or Failed"
Apr 2 00:10:43.991: INFO: Pod "downwardapi-volume-d7c3b1aa-5b57-4248-abf9-a0723b877b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.249497ms
Apr 2 00:10:45.995: INFO: Pod "downwardapi-volume-d7c3b1aa-5b57-4248-abf9-a0723b877b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009447286s
Apr 2 00:10:47.998: INFO: Pod "downwardapi-volume-d7c3b1aa-5b57-4248-abf9-a0723b877b8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012846946s
STEP: Saw pod success
Apr 2 00:10:47.998: INFO: Pod "downwardapi-volume-d7c3b1aa-5b57-4248-abf9-a0723b877b8c" satisfied condition "Succeeded or Failed"
Apr 2 00:10:48.000: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d7c3b1aa-5b57-4248-abf9-a0723b877b8c container client-container:
STEP: delete the pod
Apr 2 00:10:48.043: INFO: Waiting for pod downwardapi-volume-d7c3b1aa-5b57-4248-abf9-a0723b877b8c to disappear
Apr 2 00:10:48.067: INFO: Pod downwardapi-volume-d7c3b1aa-5b57-4248-abf9-a0723b877b8c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:10:48.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5868" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1782,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:10:48.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 2 00:10:48.544: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 2 00:10:50.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383048, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383048, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383048, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383048, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 2 00:10:53.587: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:10:54.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3447" for this suite.
STEP: Destroying namespace "webhook-3447-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.022 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":107,"skipped":1809,"failed":0} [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:10:54.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 2 00:10:54.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001' Apr 2 00:10:54.445: INFO: stderr: "" Apr 2 00:10:54.445: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 2 00:10:54.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4001' Apr 2 00:10:54.741: INFO: stderr: "" Apr 2 00:10:54.741: INFO: stdout: "update-demo-nautilus-q4ql8 update-demo-nautilus-qzs7d " Apr 2 00:10:54.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q4ql8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:10:54.910: INFO: stderr: "" Apr 2 00:10:54.910: INFO: stdout: "" Apr 2 00:10:54.910: INFO: update-demo-nautilus-q4ql8 is created but not running Apr 2 00:10:59.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4001' Apr 2 00:11:00.013: INFO: stderr: "" Apr 2 00:11:00.013: INFO: stdout: "update-demo-nautilus-q4ql8 update-demo-nautilus-qzs7d " Apr 2 00:11:00.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q4ql8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:11:00.110: INFO: stderr: "" Apr 2 00:11:00.110: INFO: stdout: "true" Apr 2 00:11:00.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q4ql8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:11:00.203: INFO: stderr: "" Apr 2 00:11:00.203: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 00:11:00.203: INFO: validating pod update-demo-nautilus-q4ql8 Apr 2 00:11:00.225: INFO: got data: { "image": "nautilus.jpg" } Apr 2 00:11:00.225: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 00:11:00.225: INFO: update-demo-nautilus-q4ql8 is verified up and running Apr 2 00:11:00.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qzs7d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:11:00.330: INFO: stderr: "" Apr 2 00:11:00.330: INFO: stdout: "true" Apr 2 00:11:00.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qzs7d -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:11:00.418: INFO: stderr: "" Apr 2 00:11:00.418: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 00:11:00.418: INFO: validating pod update-demo-nautilus-qzs7d Apr 2 00:11:00.422: INFO: got data: { "image": "nautilus.jpg" } Apr 2 00:11:00.422: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 00:11:00.422: INFO: update-demo-nautilus-qzs7d is verified up and running STEP: scaling down the replication controller Apr 2 00:11:00.425: INFO: scanned /root for discovery docs: Apr 2 00:11:00.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4001' Apr 2 00:11:01.558: INFO: stderr: "" Apr 2 00:11:01.558: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 2 00:11:01.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4001' Apr 2 00:11:01.656: INFO: stderr: "" Apr 2 00:11:01.656: INFO: stdout: "update-demo-nautilus-q4ql8 update-demo-nautilus-qzs7d " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 2 00:11:06.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4001' Apr 2 00:11:06.755: INFO: stderr: "" Apr 2 00:11:06.755: INFO: stdout: "update-demo-nautilus-q4ql8 " Apr 2 00:11:06.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q4ql8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:11:06.854: INFO: stderr: "" Apr 2 00:11:06.854: INFO: stdout: "true" Apr 2 00:11:06.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q4ql8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:11:06.951: INFO: stderr: "" Apr 2 00:11:06.951: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 00:11:06.951: INFO: validating pod update-demo-nautilus-q4ql8 Apr 2 00:11:06.954: INFO: got data: { "image": "nautilus.jpg" } Apr 2 00:11:06.954: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 2 00:11:06.954: INFO: update-demo-nautilus-q4ql8 is verified up and running STEP: scaling up the replication controller Apr 2 00:11:06.957: INFO: scanned /root for discovery docs: Apr 2 00:11:06.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4001' Apr 2 00:11:08.079: INFO: stderr: "" Apr 2 00:11:08.079: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 2 00:11:08.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4001' Apr 2 00:11:08.182: INFO: stderr: "" Apr 2 00:11:08.182: INFO: stdout: "update-demo-nautilus-pw96m update-demo-nautilus-q4ql8 " Apr 2 00:11:08.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw96m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:11:08.284: INFO: stderr: "" Apr 2 00:11:08.284: INFO: stdout: "" Apr 2 00:11:08.284: INFO: update-demo-nautilus-pw96m is created but not running Apr 2 00:11:13.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4001' Apr 2 00:11:13.381: INFO: stderr: "" Apr 2 00:11:13.381: INFO: stdout: "update-demo-nautilus-pw96m update-demo-nautilus-q4ql8 " Apr 2 00:11:13.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw96m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:11:13.481: INFO: stderr: "" Apr 2 00:11:13.481: INFO: stdout: "true" Apr 2 00:11:13.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw96m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:11:13.567: INFO: stderr: "" Apr 2 00:11:13.567: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 00:11:13.567: INFO: validating pod update-demo-nautilus-pw96m Apr 2 00:11:13.571: INFO: got data: { "image": "nautilus.jpg" } Apr 2 00:11:13.571: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 2 00:11:13.571: INFO: update-demo-nautilus-pw96m is verified up and running Apr 2 00:11:13.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q4ql8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:11:13.668: INFO: stderr: "" Apr 2 00:11:13.668: INFO: stdout: "true" Apr 2 00:11:13.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q4ql8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4001' Apr 2 00:11:13.765: INFO: stderr: "" Apr 2 00:11:13.765: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 00:11:13.765: INFO: validating pod update-demo-nautilus-q4ql8 Apr 2 00:11:13.768: INFO: got data: { "image": "nautilus.jpg" } Apr 2 00:11:13.768: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 00:11:13.768: INFO: update-demo-nautilus-q4ql8 is verified up and running STEP: using delete to clean up resources Apr 2 00:11:13.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001' Apr 2 00:11:13.863: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 2 00:11:13.863: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 2 00:11:13.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4001' Apr 2 00:11:13.967: INFO: stderr: "No resources found in kubectl-4001 namespace.\n" Apr 2 00:11:13.967: INFO: stdout: "" Apr 2 00:11:13.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4001 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 2 00:11:14.063: INFO: stderr: "" Apr 2 00:11:14.063: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:11:14.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4001" for this suite. 
• [SLOW TEST:19.970 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":108,"skipped":1809,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:11:14.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Apr 2 00:11:14.378: INFO: Waiting up to 5m0s for pod "var-expansion-0eccfc10-c515-4b7e-a3a8-1e8dcf09cd12" in namespace "var-expansion-9090" to be "Succeeded or Failed" Apr 2 00:11:14.381: INFO: Pod "var-expansion-0eccfc10-c515-4b7e-a3a8-1e8dcf09cd12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.910339ms Apr 2 00:11:16.428: INFO: Pod "var-expansion-0eccfc10-c515-4b7e-a3a8-1e8dcf09cd12": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.050059611s Apr 2 00:11:18.431: INFO: Pod "var-expansion-0eccfc10-c515-4b7e-a3a8-1e8dcf09cd12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053648848s STEP: Saw pod success Apr 2 00:11:18.432: INFO: Pod "var-expansion-0eccfc10-c515-4b7e-a3a8-1e8dcf09cd12" satisfied condition "Succeeded or Failed" Apr 2 00:11:18.434: INFO: Trying to get logs from node latest-worker pod var-expansion-0eccfc10-c515-4b7e-a3a8-1e8dcf09cd12 container dapi-container: STEP: delete the pod Apr 2 00:11:18.459: INFO: Waiting for pod var-expansion-0eccfc10-c515-4b7e-a3a8-1e8dcf09cd12 to disappear Apr 2 00:11:18.479: INFO: Pod var-expansion-0eccfc10-c515-4b7e-a3a8-1e8dcf09cd12 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:11:18.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9090" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1817,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:11:18.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:11:18.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4422" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":110,"skipped":1821,"failed":0} ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:11:18.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:11:18.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5798" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":111,"skipped":1821,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:11:18.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-5498 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 2 00:11:18.755: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 2 00:11:18.800: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 2 00:11:20.829: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 2 00:11:22.805: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:11:24.804: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:11:26.805: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:11:28.804: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:11:30.811: INFO: The status of Pod netserver-0 is Running (Ready = false) 
Apr 2 00:11:32.804: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:11:34.804: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:11:36.804: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:11:38.807: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:11:40.823: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 2 00:11:40.836: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 2 00:11:44.904: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.223 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5498 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:11:44.904: INFO: >>> kubeConfig: /root/.kube/config I0402 00:11:44.932183 7 log.go:172] (0xc00288a210) (0xc001173180) Create stream I0402 00:11:44.932229 7 log.go:172] (0xc00288a210) (0xc001173180) Stream added, broadcasting: 1 I0402 00:11:44.933888 7 log.go:172] (0xc00288a210) Reply frame received for 1 I0402 00:11:44.933937 7 log.go:172] (0xc00288a210) (0xc0011732c0) Create stream I0402 00:11:44.933951 7 log.go:172] (0xc00288a210) (0xc0011732c0) Stream added, broadcasting: 3 I0402 00:11:44.934694 7 log.go:172] (0xc00288a210) Reply frame received for 3 I0402 00:11:44.934735 7 log.go:172] (0xc00288a210) (0xc00124a140) Create stream I0402 00:11:44.934748 7 log.go:172] (0xc00288a210) (0xc00124a140) Stream added, broadcasting: 5 I0402 00:11:44.935608 7 log.go:172] (0xc00288a210) Reply frame received for 5 I0402 00:11:45.996797 7 log.go:172] (0xc00288a210) Data frame received for 3 I0402 00:11:45.996856 7 log.go:172] (0xc0011732c0) (3) Data frame handling I0402 00:11:45.996886 7 log.go:172] (0xc0011732c0) (3) Data frame sent I0402 00:11:45.996917 7 log.go:172] (0xc00288a210) Data frame received for 3 I0402 00:11:45.996936 7 log.go:172] (0xc0011732c0) (3) Data frame 
handling I0402 00:11:45.997317 7 log.go:172] (0xc00288a210) Data frame received for 5 I0402 00:11:45.997357 7 log.go:172] (0xc00124a140) (5) Data frame handling I0402 00:11:46.000004 7 log.go:172] (0xc00288a210) Data frame received for 1 I0402 00:11:46.000028 7 log.go:172] (0xc001173180) (1) Data frame handling I0402 00:11:46.000048 7 log.go:172] (0xc001173180) (1) Data frame sent I0402 00:11:46.000065 7 log.go:172] (0xc00288a210) (0xc001173180) Stream removed, broadcasting: 1 I0402 00:11:46.000099 7 log.go:172] (0xc00288a210) Go away received I0402 00:11:46.000245 7 log.go:172] (0xc00288a210) (0xc001173180) Stream removed, broadcasting: 1 I0402 00:11:46.000278 7 log.go:172] (0xc00288a210) (0xc0011732c0) Stream removed, broadcasting: 3 I0402 00:11:46.000293 7 log.go:172] (0xc00288a210) (0xc00124a140) Stream removed, broadcasting: 5 Apr 2 00:11:46.000: INFO: Found all expected endpoints: [netserver-0] Apr 2 00:11:46.004: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.81 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5498 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:11:46.004: INFO: >>> kubeConfig: /root/.kube/config I0402 00:11:46.042081 7 log.go:172] (0xc002e48420) (0xc0009a7e00) Create stream I0402 00:11:46.042139 7 log.go:172] (0xc002e48420) (0xc0009a7e00) Stream added, broadcasting: 1 I0402 00:11:46.050004 7 log.go:172] (0xc002e48420) Reply frame received for 1 I0402 00:11:46.050046 7 log.go:172] (0xc002e48420) (0xc001173360) Create stream I0402 00:11:46.050061 7 log.go:172] (0xc002e48420) (0xc001173360) Stream added, broadcasting: 3 I0402 00:11:46.052772 7 log.go:172] (0xc002e48420) Reply frame received for 3 I0402 00:11:46.052803 7 log.go:172] (0xc002e48420) (0xc000bf81e0) Create stream I0402 00:11:46.052814 7 log.go:172] (0xc002e48420) (0xc000bf81e0) Stream added, broadcasting: 5 I0402 00:11:46.054120 7 log.go:172] (0xc002e48420) Reply 
frame received for 5 I0402 00:11:47.148200 7 log.go:172] (0xc002e48420) Data frame received for 3 I0402 00:11:47.148231 7 log.go:172] (0xc001173360) (3) Data frame handling I0402 00:11:47.148247 7 log.go:172] (0xc001173360) (3) Data frame sent I0402 00:11:47.148494 7 log.go:172] (0xc002e48420) Data frame received for 5 I0402 00:11:47.148533 7 log.go:172] (0xc000bf81e0) (5) Data frame handling I0402 00:11:47.148722 7 log.go:172] (0xc002e48420) Data frame received for 3 I0402 00:11:47.148755 7 log.go:172] (0xc001173360) (3) Data frame handling I0402 00:11:47.151304 7 log.go:172] (0xc002e48420) Data frame received for 1 I0402 00:11:47.151344 7 log.go:172] (0xc0009a7e00) (1) Data frame handling I0402 00:11:47.151376 7 log.go:172] (0xc0009a7e00) (1) Data frame sent I0402 00:11:47.151419 7 log.go:172] (0xc002e48420) (0xc0009a7e00) Stream removed, broadcasting: 1 I0402 00:11:47.151463 7 log.go:172] (0xc002e48420) Go away received I0402 00:11:47.151592 7 log.go:172] (0xc002e48420) (0xc0009a7e00) Stream removed, broadcasting: 1 I0402 00:11:47.151636 7 log.go:172] (0xc002e48420) (0xc001173360) Stream removed, broadcasting: 3 I0402 00:11:47.151659 7 log.go:172] (0xc002e48420) (0xc000bf81e0) Stream removed, broadcasting: 5 Apr 2 00:11:47.151: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:11:47.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5498" for this suite. 
• [SLOW TEST:28.467 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1823,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:11:47.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 2 00:11:47.245: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:11:47.284: INFO: Number of nodes with available pods: 0 Apr 2 00:11:47.284: INFO: Node latest-worker is running more than one daemon pod Apr 2 00:11:48.288: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:11:48.291: INFO: Number of nodes with available pods: 0 Apr 2 00:11:48.292: INFO: Node latest-worker is running more than one daemon pod Apr 2 00:11:49.289: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:11:49.292: INFO: Number of nodes with available pods: 0 Apr 2 00:11:49.292: INFO: Node latest-worker is running more than one daemon pod Apr 2 00:11:50.288: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:11:50.290: INFO: Number of nodes with available pods: 0 Apr 2 00:11:50.290: INFO: Node latest-worker is running more than one daemon pod Apr 2 00:11:51.289: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:11:51.293: INFO: Number of nodes with available pods: 2 Apr 2 00:11:51.293: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Apr 2 00:11:51.310: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:11:51.313: INFO: Number of nodes with available pods: 1
Apr 2 00:11:51.313: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:11:52.344: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:11:52.351: INFO: Number of nodes with available pods: 1
Apr 2 00:11:52.351: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:11:53.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:11:53.322: INFO: Number of nodes with available pods: 1
Apr 2 00:11:53.322: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:11:54.318: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:11:54.321: INFO: Number of nodes with available pods: 1
Apr 2 00:11:54.321: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:11:55.317: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:11:55.320: INFO: Number of nodes with available pods: 1
Apr 2 00:11:55.320: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:11:56.318: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:11:56.322: INFO: Number of nodes with available pods: 1
Apr 2 00:11:56.322: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:11:57.318: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:11:57.321: INFO: Number of nodes with available pods: 1
Apr 2 00:11:57.321: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:11:58.318: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:11:58.322: INFO: Number of nodes with available pods: 1
Apr 2 00:11:58.322: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:11:59.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:11:59.322: INFO: Number of nodes with available pods: 1
Apr 2 00:11:59.322: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:12:00.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:12:00.404: INFO: Number of nodes with available pods: 1
Apr 2 00:12:00.404: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:12:01.318: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:12:01.322: INFO: Number of nodes with available pods: 1
Apr 2 00:12:01.322: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:12:02.318: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:12:02.322: INFO: Number of nodes with available pods: 1
Apr 2 00:12:02.322: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:12:03.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:12:03.323: INFO: Number of nodes with available pods: 1
Apr 2 00:12:03.323: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:12:04.317: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:12:04.320: INFO: Number of nodes with available pods: 1
Apr 2 00:12:04.320: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:12:05.318: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:12:05.321: INFO: Number of nodes with available pods: 1
Apr 2 00:12:05.321: INFO: Node latest-worker2 is running more than one daemon pod
Apr 2 00:12:06.318: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:12:06.321: INFO: Number of nodes with available pods: 2
Apr 2 00:12:06.321: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6052, will wait for the garbage collector to delete the pods
Apr 2 00:12:06.383: INFO: Deleting DaemonSet.extensions daemon-set took: 6.844267ms
Apr 2 00:12:06.683: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.192236ms
Apr 2 00:12:12.991: INFO: Number of nodes with available pods: 0
Apr 2 00:12:12.991: INFO: Number of running nodes: 0, number of available pods: 0
Apr 2 00:12:12.994: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6052/daemonsets","resourceVersion":"4667739"},"items":null}
Apr 2 00:12:12.996: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6052/pods","resourceVersion":"4667739"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:12:13.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6052" for this suite.
• [SLOW TEST:25.850 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":113,"skipped":1836,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:12:13.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:12:27.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-913" for this suite.
• [SLOW TEST:14.095 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":114,"skipped":1859,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:12:27.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 2 00:12:27.971: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 2 00:12:29.982: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383147, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383147, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383148, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383147, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 2 00:12:33.032: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:12:33.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4203" for this suite.
STEP: Destroying namespace "webhook-4203-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.154 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":115,"skipped":1873,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:12:33.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:12:33.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 2 00:12:36.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9585 create -f -'
Apr 2 00:12:39.381: INFO: stderr: ""
Apr 2 00:12:39.381: INFO: stdout: "e2e-test-crd-publish-openapi-5549-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 2 00:12:39.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9585 delete e2e-test-crd-publish-openapi-5549-crds test-cr'
Apr 2 00:12:39.494: INFO: stderr: ""
Apr 2 00:12:39.494: INFO: stdout: "e2e-test-crd-publish-openapi-5549-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Apr 2 00:12:39.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9585 apply -f -'
Apr 2 00:12:39.756: INFO: stderr: ""
Apr 2 00:12:39.756: INFO: stdout: "e2e-test-crd-publish-openapi-5549-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 2 00:12:39.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9585 delete e2e-test-crd-publish-openapi-5549-crds test-cr'
Apr 2 00:12:39.865: INFO: stderr: ""
Apr 2 00:12:39.865: INFO: stdout: "e2e-test-crd-publish-openapi-5549-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 2 00:12:39.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5549-crds'
Apr 2 00:12:40.082: INFO: stderr: ""
Apr 2 00:12:40.083: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5549-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:12:41.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9585" for this suite.
• [SLOW TEST:8.724 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":116,"skipped":1883,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:12:41.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 2 00:12:50.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 2 00:12:50.126: INFO: Pod pod-with-poststart-http-hook still exists
Apr 2 00:12:52.126: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 2 00:12:52.130: INFO: Pod pod-with-poststart-http-hook still exists
Apr 2 00:12:54.126: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 2 00:12:54.130: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:12:54.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9508" for this suite.
• [SLOW TEST:12.154 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":1895,"failed":0}
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:12:54.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Apr 2 00:12:54.193: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Apr 2 00:12:54.237: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Apr 2 00:12:54.237: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Apr 2 00:12:54.265: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Apr 2 00:12:54.265: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Apr 2 00:12:54.294: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Apr 2 00:12:54.294: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Apr 2 00:13:01.447: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:13:01.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-2965" for this suite.
• [SLOW TEST:7.349 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":118,"skipped":1895,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:13:01.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 2 00:13:01.775: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 2 00:13:01.791: INFO: Waiting for terminating namespaces to be deleted...
Apr 2 00:13:01.812: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 2 00:13:01.830: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 00:13:01.830: INFO: Container kindnet-cni ready: true, restart count 0
Apr 2 00:13:01.830: INFO: pod-no-resources from limitrange-2965 started at 2020-04-02 00:12:54 +0000 UTC (1 container statuses recorded)
Apr 2 00:13:01.830: INFO: Container pause ready: true, restart count 0
Apr 2 00:13:01.830: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 00:13:01.830: INFO: Container kube-proxy ready: true, restart count 0
Apr 2 00:13:01.830: INFO: pod-partial-resources from limitrange-2965 started at 2020-04-02 00:12:54 +0000 UTC (1 container statuses recorded)
Apr 2 00:13:01.830: INFO: Container pause ready: true, restart count 0
Apr 2 00:13:01.830: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 2 00:13:01.835: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 00:13:01.835: INFO: Container kube-proxy ready: true, restart count 0
Apr 2 00:13:01.835: INFO: pod-handle-http-request from container-lifecycle-hook-9508 started at 2020-04-02 00:12:42 +0000 UTC (1 container statuses recorded)
Apr 2 00:13:01.835: INFO: Container pod-handle-http-request ready: false, restart count 0
Apr 2 00:13:01.835: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 00:13:01.835: INFO: Container kindnet-cni ready: true, restart count 0
Apr 2 00:13:01.835: INFO: pfpod2 from limitrange-2965 started at 2020-04-02 00:13:01 +0000 UTC (1 container statuses recorded)
Apr 2 00:13:01.835: INFO: Container pause ready: false, restart count 0
Apr 2 00:13:01.835: INFO: pfpod from limitrange-2965 started at 2020-04-02 00:12:56 +0000 UTC (1 container statuses recorded)
Apr 2 00:13:01.835: INFO: Container pause ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-37888b97-32fc-4936-a959-a2adb2221cc0 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-37888b97-32fc-4936-a959-a2adb2221cc0 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-37888b97-32fc-4936-a959-a2adb2221cc0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:13:20.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1016" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:18.575 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":119,"skipped":1897,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:13:20.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:13:20.120: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:13:21.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8759" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":120,"skipped":1915,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:13:21.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Apr 2 00:13:25.438: INFO: Pod pod-hostip-e7b74f72-043d-4928-b639-1b90e227ffbe has hostIP: 172.17.0.12
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:13:25.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9403" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":1929,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:13:25.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 2 00:13:25.754: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 2 00:13:25.810: INFO: Waiting for terminating namespaces to be deleted... 
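The NodeSelector predicate this test exercises is, at its core, a label-subset check: a pod schedules onto a node only if every key/value pair in the pod's `nodeSelector` is present in the node's labels. A minimal sketch (the `label: nonempty` selector is hypothetical, standing in for the test's "nonempty NodeSelector" that no node satisfies):

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """Return True if every key/value in the pod's nodeSelector appears on
    the node. An empty selector matches any node; a nonempty selector no
    node satisfies leaves the pod Pending with a FailedScheduling event."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Two workers with only their hostname labels, as in this cluster:
nodes = {
    "latest-worker": {"kubernetes.io/hostname": "latest-worker"},
    "latest-worker2": {"kubernetes.io/hostname": "latest-worker2"},
}
selector = {"label": "nonempty"}  # hypothetical selector for illustration
schedulable = [name for name, labels in nodes.items()
               if node_selector_matches(labels, selector)]
# schedulable is empty, mirroring "0/3 nodes are available: 3 node(s)
# didn't match node selector" in the events below.
```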
Apr 2 00:13:25.939: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 2 00:13:25.981: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 2 00:13:25.981: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 00:13:25.981: INFO: pod1 from sched-pred-1016 started at 2020-04-02 00:13:05 +0000 UTC (1 container status recorded) Apr 2 00:13:25.981: INFO: Container pod1 ready: true, restart count 0 Apr 2 00:13:25.981: INFO: pod3 from sched-pred-1016 started at 2020-04-02 00:13:16 +0000 UTC (1 container status recorded) Apr 2 00:13:25.981: INFO: Container pod3 ready: true, restart count 0 Apr 2 00:13:25.981: INFO: pod2 from sched-pred-1016 started at 2020-04-02 00:13:11 +0000 UTC (1 container status recorded) Apr 2 00:13:25.981: INFO: Container pod2 ready: true, restart count 0 Apr 2 00:13:25.981: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 2 00:13:25.981: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 00:13:25.981: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 2 00:13:26.075: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 2 00:13:26.075: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 00:13:26.075: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 2 00:13:26.075: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 00:13:26.075: INFO: pod-hostip-e7b74f72-043d-4928-b639-1b90e227ffbe from pods-9403 started at 2020-04-02 00:13:21 +0000 UTC (1 container status recorded) Apr 2 00:13:26.075: INFO: Container test ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1601d855a194420a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1601d855a27d78de], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:13:27.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7825" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":122,"skipped":1940,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:13:27.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:13:38.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3332" for this suite. • [SLOW TEST:11.089 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":123,"skipped":1946,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:13:38.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 2 00:13:42.316: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:13:42.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4508" for this suite. 
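The termination-message steps above follow a simple selection rule: the kubelet prefers whatever the container wrote to its `terminationMessagePath` file, and with `TerminationMessagePolicy: FallbackToLogsOnError` it falls back to the tail of the container log only when the file is empty and the container failed. A sketch of that rule (a simplification under stated assumptions, not the kubelet's actual code; the 4096-byte cap is the documented message limit):

```python
def termination_message(file_contents: str, logs: str,
                        policy: str = "File", exit_code: int = 0,
                        limit: int = 4096) -> str:
    """Pick a container's termination message: the message file wins when
    non-empty; FallbackToLogsOnError uses the log tail only on failure
    with an empty file; otherwise the message stays empty."""
    if file_contents:
        return file_contents[:limit]
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs[-limit:]
    return ""

# In this test the pod succeeds and writes "OK" to the file, so the file
# content is reported even though the fallback policy is set.
msg = termination_message("OK", "ignored logs",
                          policy="FallbackToLogsOnError", exit_code=0)
```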
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":1950,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:13:42.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:13:42.414: INFO: Creating deployment "webserver-deployment" Apr 2 00:13:42.447: INFO: Waiting for observed generation 1 Apr 2 00:13:44.454: INFO: Waiting for all required pods to come up Apr 2 00:13:44.459: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 2 00:13:52.469: INFO: Waiting for deployment "webserver-deployment" to complete Apr 2 00:13:52.476: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 2 00:13:52.481: INFO: Updating deployment webserver-deployment Apr 2 00:13:52.481: INFO: Waiting for observed generation 2 Apr 2 00:13:54.510: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 2 00:13:54.513: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 
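The arithmetic this test verifies can be sketched from its own numbers: with the old ReplicaSet at 8 replicas and the new one at 5, scaling the Deployment from 10 to 30 with maxSurge 3 allows 33 total pods, and the 20-replica surplus is split in proportion to each ReplicaSet's current size, yielding 20 and 13. A simplified sketch of that distribution (not the exact client-go algorithm, which handles rounding ties and annotations differently):

```python
import math

def proportional_scale(rs_sizes: dict, new_total: int, max_surge: int) -> dict:
    """Distribute (new_total + max_surge - current_total) extra replicas
    across ReplicaSets in proportion to their current size, handing any
    rounding remainder to the sets with the largest fractional share."""
    allowed = new_total + max_surge
    current = sum(rs_sizes.values())
    leftover = allowed - current
    shares = {n: leftover * size / current for n, size in rs_sizes.items()}
    result = {n: size + math.floor(shares[n]) for n, size in rs_sizes.items()}
    remainder = allowed - sum(result.values())
    for n in sorted(shares, key=lambda k: shares[k] - math.floor(shares[k]),
                    reverse=True):
        if remainder <= 0:
            break
        result[n] += 1
        remainder -= 1
    return result

# The test's numbers: RSes at 8 and 5, scale 10 -> 30, maxSurge 3.
sizes = proportional_scale({"old": 8, "new": 5}, new_total=30, max_surge=3)
# sizes matches the .spec.replicas = 20 / 13 the test verifies below.
```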
Apr 2 00:13:54.516: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 2 00:13:54.523: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 2 00:13:54.523: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 2 00:13:54.525: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 2 00:13:54.529: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 2 00:13:54.529: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 2 00:13:54.535: INFO: Updating deployment webserver-deployment Apr 2 00:13:54.535: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 2 00:13:54.649: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 2 00:13:54.667: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 2 00:13:54.897: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5893 /apis/apps/v1/namespaces/deployment-5893/deployments/webserver-deployment 2c88d4f0-51e3-44f0-848e-76a89cd6465f 4668747 3 2020-04-02 00:13:42 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a9d878 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-02 00:13:52 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-02 00:13:54 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 2 00:13:54.947: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5893 /apis/apps/v1/namespaces/deployment-5893/replicasets/webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 4668791 3 2020-04-02 00:13:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 2c88d4f0-51e3-44f0-848e-76a89cd6465f 0xc0028ff287 
0xc0028ff288}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028ff308 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 2 00:13:54.947: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 2 00:13:54.947: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5893 /apis/apps/v1/namespaces/deployment-5893/replicasets/webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 4668790 3 2020-04-02 00:13:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2c88d4f0-51e3-44f0-848e-76a89cd6465f 0xc0028ff1c7 0xc0028ff1c8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd 
pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028ff228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 2 00:13:55.073: INFO: Pod "webserver-deployment-595b5b9587-2dm79" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2dm79 webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-2dm79 2c9fb975-0abf-43a2-b908-7a00c803cf00 4668789 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e901a7 0xc003e901a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-02 00:13:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.073: INFO: Pod "webserver-deployment-595b5b9587-2hp4r" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2hp4r webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-2hp4r 63e37b25-2842-40f8-88a7-199e5f1e7f20 4668783 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e90307 0xc003e90308}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.073: INFO: Pod "webserver-deployment-595b5b9587-48ntt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-48ntt webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-48ntt 7c2e4479-4d48-46f2-81ce-63e51ac4ce73 4668609 0 2020-04-02 00:13:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e90427 0xc003e90428}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.94,StartTime:2020-04-02 00:13:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 00:13:47 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6dee9a7e021f51769b425f73dbc6c0e7686659289cc65c6724566515ec0434ec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.074: INFO: Pod "webserver-deployment-595b5b9587-4nxzt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4nxzt webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-4nxzt 6f80b208-3446-42e8-ae8a-46dfe3b9dafb 4668760 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e905a7 0xc003e905a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.074: INFO: Pod "webserver-deployment-595b5b9587-4th6h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4th6h webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-4th6h ce37566f-a6bd-44b5-a648-6d7440b576f7 4668766 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e906c7 0xc003e906c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.074: INFO: Pod "webserver-deployment-595b5b9587-5pp5j" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5pp5j webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-5pp5j 827ed956-341d-4d3b-b9ff-ad6a3062f9ba 4668640 0 2020-04-02 00:13:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e907e7 0xc003e907e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.95,StartTime:2020-04-02 00:13:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 00:13:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fbd2d3648e12c80094031ee3bab48abf837a5bacaf32d249561c4cf0ac7f4bbb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.074: INFO: Pod "webserver-deployment-595b5b9587-c2xwx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c2xwx webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-c2xwx 6f76f4a3-5088-4831-ac69-bd2eb44ca41c 4668623 0 2020-04-02 00:13:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e90967 0xc003e90968}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.238,StartTime:2020-04-02 00:13:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 00:13:48 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://44f84ab59c35fa141d7f9202e5a569043232a3defdeed9ffeaefbdd72d008e82,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.238,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.075: INFO: Pod "webserver-deployment-595b5b9587-csn89" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-csn89 webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-csn89 9659a9e8-2fcc-45e1-8681-3e443660e62c 4668785 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e90ae7 0xc003e90ae8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.075: INFO: Pod "webserver-deployment-595b5b9587-dmm6r" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dmm6r webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-dmm6r 8c70d457-083f-480e-b260-87455eec6d38 4668636 0 2020-04-02 00:13:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e90c17 0xc003e90c18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.96,StartTime:2020-04-02 00:13:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 00:13:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fe48930bdafe8eb0d8c5454ca46f5243fa2fbf40d88e52dab914d49ab5c485cc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.075: INFO: Pod "webserver-deployment-595b5b9587-f6tb6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f6tb6 webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-f6tb6 79e14767-c8dc-4c19-b7e9-7e2a345faa9a 4668756 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e90d97 0xc003e90d98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.075: INFO: Pod "webserver-deployment-595b5b9587-g758h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g758h webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-g758h b28fb467-7509-4426-a0c7-715a9ff68b63 4668598 0 2020-04-02 00:13:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e90eb7 0xc003e90eb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.93,StartTime:2020-04-02 00:13:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 00:13:45 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ffd7bd7e685748341af37054ac6913000d3aa1b793c1e780d2f2312b89720c3c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.075: INFO: Pod "webserver-deployment-595b5b9587-h6km6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h6km6 webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-h6km6 6ce966da-91b7-40c9-b29b-997eb4bf8eb9 4668784 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e91037 0xc003e91038}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 2 00:13:55.076: INFO: Pod "webserver-deployment-595b5b9587-jtkgf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jtkgf webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-jtkgf 77cd1083-33a5-4b09-b135-aa2d62a35f86 4668648 0 2020-04-02 00:13:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e91157 0xc003e91158}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.241,StartTime:2020-04-02 00:13:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 00:13:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9b5c187b249d2a6bdefdbf52b5c8555348ec0a71931de7588528f14d27da62a5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 2 00:13:55.076: INFO: Pod "webserver-deployment-595b5b9587-kctmw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kctmw webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-kctmw 0c642efe-57be-47a0-990e-8fbc66380a79 4668654 0 2020-04-02 00:13:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e912d7 0xc003e912d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.237,StartTime:2020-04-02 00:13:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 00:13:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ebea6fdb1fdb883bf006c6234b170c6a779a05eeac9e09452227e1c473d60b03,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 2 00:13:55.076: INFO: Pod "webserver-deployment-595b5b9587-lrvqk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lrvqk webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-lrvqk a21e3c0c-b850-43e9-a71b-25d825047e85 4668810 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e91457 0xc003e91458}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-02 00:13:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 2 00:13:55.076: INFO: Pod "webserver-deployment-595b5b9587-mcjzw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mcjzw webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-mcjzw d9e8df01-ea72-49c4-908b-7460f5335bf3 4668662 0 2020-04-02 00:13:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e915b7 0xc003e915b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.97,StartTime:2020-04-02 00:13:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 00:13:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4e3920758a52f9f394a7303cbb22e888bedfd64c5d9e42707f82fcf5615e94a0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 2 00:13:55.076: INFO: Pod "webserver-deployment-595b5b9587-q6jjv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q6jjv webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-q6jjv d18e89e6-af86-42ef-893e-8ba71c09e531 4668786 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e91737 0xc003e91738}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 2 00:13:55.076: INFO: Pod "webserver-deployment-595b5b9587-t596t" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t596t webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-t596t d4f626c6-934e-4b9b-b819-46af8782f987 4668769 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e91857 0xc003e91858}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 2 00:13:55.077: INFO: Pod "webserver-deployment-595b5b9587-vwdql" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vwdql webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-vwdql 02c18f78-faa9-4752-b426-dc7666b67cc6 4668782 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e91977 0xc003e91978}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 2 00:13:55.077: INFO: Pod "webserver-deployment-595b5b9587-w7qpx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w7qpx webserver-deployment-595b5b9587- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-595b5b9587-w7qpx fbae9cc3-e74b-425c-bb90-0572d51d7570 4668749 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 100f77d9-e659-49a9-9605-5a6e93152f1c 0xc003e91a97 0xc003e91a98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 2 00:13:55.077: INFO: Pod "webserver-deployment-c7997dcc8-754kc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-754kc webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-754kc fd444f0d-18fa-4401-a1e1-08969f779eeb 4668779 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc003e91bb7 0xc003e91bb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.077: INFO: Pod "webserver-deployment-c7997dcc8-8vmj4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8vmj4 webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-8vmj4 41c0c0bb-930d-4386-8ca9-14fc201f04b9 4668776 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc003e91ce7 0xc003e91ce8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.077: INFO: Pod "webserver-deployment-c7997dcc8-9h7nm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9h7nm webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-9h7nm b739a336-e426-4d19-8def-fe0d66147b66 4668717 0 2020-04-02 00:13:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc003e91e17 0xc003e91e18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-02 00:13:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.078: INFO: Pod "webserver-deployment-c7997dcc8-b6jdm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b6jdm webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-b6jdm 7336df60-ffa2-4e24-baad-7c9aca138750 4668780 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc003e91f97 0xc003e91f98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.078: INFO: Pod "webserver-deployment-c7997dcc8-cj4z5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cj4z5 webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-cj4z5 9e202f7b-4596-4080-add2-fa1f2109f496 4668721 0 2020-04-02 00:13:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc0020180c7 0xc0020180c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-02 00:13:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.078: INFO: Pod "webserver-deployment-c7997dcc8-hnv6h" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hnv6h webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-hnv6h d5349aac-d2ac-4d5d-b794-eb48be99dbd5 4668713 0 2020-04-02 00:13:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc002018257 0xc002018258}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-02 00:13:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.078: INFO: Pod "webserver-deployment-c7997dcc8-ksltj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ksltj webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-ksltj d248fdc0-314b-4120-ae83-4efb9dfb15d6 4668754 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc0020183f7 0xc0020183f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.078: INFO: Pod "webserver-deployment-c7997dcc8-l8hlz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l8hlz webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-l8hlz abd0b6d7-045a-46ee-ba51-1b0fe60eea80 4668752 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc002018527 0xc002018528}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.078: INFO: Pod "webserver-deployment-c7997dcc8-r94g2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r94g2 webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-r94g2 06300539-fb59-49d8-9090-60646babf8f6 4668808 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc002018657 0xc002018658}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-02 00:13:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.079: INFO: Pod "webserver-deployment-c7997dcc8-tm6ml" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tm6ml webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-tm6ml adf859e9-20b1-4777-88b7-0e83fb995f81 4668781 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc0020187d7 0xc0020187d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.079: INFO: Pod "webserver-deployment-c7997dcc8-vm6hq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vm6hq webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-vm6hq 5aba32c7-0fc1-4738-b13b-e52ed6b7fe93 4668697 0 2020-04-02 00:13:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc002018907 0xc002018908}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-02 00:13:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.079: INFO: Pod "webserver-deployment-c7997dcc8-xvx6s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xvx6s webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-xvx6s 365ce375-416c-41c9-91f1-22aee4729297 4668788 0 2020-04-02 00:13:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc002018a87 0xc002018a88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 2 00:13:55.079: INFO: Pod "webserver-deployment-c7997dcc8-xx96p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xx96p webserver-deployment-c7997dcc8- deployment-5893 /api/v1/namespaces/deployment-5893/pods/webserver-deployment-c7997dcc8-xx96p ad349614-646a-4ed6-ab9e-199f39a9380f 4668700 0 2020-04-02 00:13:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e250296f-6915-4999-9ba7-44b604ffb0e1 0xc002018bb7 0xc002018bb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4xccx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4xccx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4xccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-02 00:13:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:13:55.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5893" for this suite. • [SLOW TEST:12.787 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":125,"skipped":1973,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:13:55.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-798/configmap-test-df3a0dfe-b22f-4f9c-b46f-288c1c087419 STEP: Creating a pod to test consume configMaps Apr 2 00:13:55.415: INFO: Waiting up to 5m0s for pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503" in namespace "configmap-798" to be "Succeeded or Failed" Apr 2 00:13:55.461: INFO: Pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503": Phase="Pending", Reason="", readiness=false. Elapsed: 46.039713ms Apr 2 00:13:57.464: INFO: Pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04909875s Apr 2 00:13:59.658: INFO: Pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242844982s Apr 2 00:14:01.830: INFO: Pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414901586s Apr 2 00:14:03.841: INFO: Pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503": Phase="Pending", Reason="", readiness=false. Elapsed: 8.425891158s Apr 2 00:14:05.979: INFO: Pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503": Phase="Pending", Reason="", readiness=false. Elapsed: 10.563745042s Apr 2 00:14:08.429: INFO: Pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503": Phase="Pending", Reason="", readiness=false. Elapsed: 13.013827429s Apr 2 00:14:10.484: INFO: Pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503": Phase="Pending", Reason="", readiness=false. Elapsed: 15.068869824s Apr 2 00:14:12.544: INFO: Pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503": Phase="Running", Reason="", readiness=true. Elapsed: 17.128358971s Apr 2 00:14:14.559: INFO: Pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.143846285s STEP: Saw pod success Apr 2 00:14:14.559: INFO: Pod "pod-configmaps-4c16f538-34ad-4344-b922-32c004675503" satisfied condition "Succeeded or Failed" Apr 2 00:14:14.584: INFO: Trying to get logs from node latest-worker pod pod-configmaps-4c16f538-34ad-4344-b922-32c004675503 container env-test: STEP: delete the pod Apr 2 00:14:14.848: INFO: Waiting for pod pod-configmaps-4c16f538-34ad-4344-b922-32c004675503 to disappear Apr 2 00:14:14.865: INFO: Pod pod-configmaps-4c16f538-34ad-4344-b922-32c004675503 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:14:14.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-798" for this suite. • [SLOW TEST:19.727 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":1978,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:14:14.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 2 00:14:15.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7caabb74-e844-43e6-a560-ffa42edb0c85" in namespace "projected-8850" to be "Succeeded or Failed" Apr 2 00:14:15.231: INFO: Pod "downwardapi-volume-7caabb74-e844-43e6-a560-ffa42edb0c85": Phase="Pending", Reason="", readiness=false. Elapsed: 10.564734ms Apr 2 00:14:17.454: INFO: Pod "downwardapi-volume-7caabb74-e844-43e6-a560-ffa42edb0c85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233973168s Apr 2 00:14:19.458: INFO: Pod "downwardapi-volume-7caabb74-e844-43e6-a560-ffa42edb0c85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.237667273s STEP: Saw pod success Apr 2 00:14:19.458: INFO: Pod "downwardapi-volume-7caabb74-e844-43e6-a560-ffa42edb0c85" satisfied condition "Succeeded or Failed" Apr 2 00:14:19.460: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7caabb74-e844-43e6-a560-ffa42edb0c85 container client-container: STEP: delete the pod Apr 2 00:14:19.493: INFO: Waiting for pod downwardapi-volume-7caabb74-e844-43e6-a560-ffa42edb0c85 to disappear Apr 2 00:14:19.506: INFO: Pod downwardapi-volume-7caabb74-e844-43e6-a560-ffa42edb0c85 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:14:19.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8850" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":1979,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:14:19.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:14:23.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5206" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":1996,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:14:23.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-d7a8234d-73f3-4a14-814f-b940bef44b6b STEP: Creating a pod to test consume configMaps Apr 2 00:14:23.885: INFO: Waiting up to 5m0s for pod "pod-configmaps-f88c9224-5f2c-4934-aa2f-3eb163ff84ce" in namespace "configmap-5156" to be "Succeeded or Failed" Apr 2 00:14:23.906: INFO: Pod "pod-configmaps-f88c9224-5f2c-4934-aa2f-3eb163ff84ce": Phase="Pending", Reason="", readiness=false. Elapsed: 21.577238ms Apr 2 00:14:25.910: INFO: Pod "pod-configmaps-f88c9224-5f2c-4934-aa2f-3eb163ff84ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025412479s Apr 2 00:14:27.914: INFO: Pod "pod-configmaps-f88c9224-5f2c-4934-aa2f-3eb163ff84ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029926241s STEP: Saw pod success Apr 2 00:14:27.915: INFO: Pod "pod-configmaps-f88c9224-5f2c-4934-aa2f-3eb163ff84ce" satisfied condition "Succeeded or Failed" Apr 2 00:14:27.918: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f88c9224-5f2c-4934-aa2f-3eb163ff84ce container configmap-volume-test: STEP: delete the pod Apr 2 00:14:27.952: INFO: Waiting for pod pod-configmaps-f88c9224-5f2c-4934-aa2f-3eb163ff84ce to disappear Apr 2 00:14:27.955: INFO: Pod pod-configmaps-f88c9224-5f2c-4934-aa2f-3eb163ff84ce no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:14:27.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5156" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":1999,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:14:27.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-a2319d01-b558-4ff6-9425-1e31f001cef7 STEP: Creating a pod to test consume secrets 
Apr 2 00:14:28.125: INFO: Waiting up to 5m0s for pod "pod-secrets-eef4f40c-ef0d-4cf0-819d-01f1dcf308db" in namespace "secrets-732" to be "Succeeded or Failed" Apr 2 00:14:28.135: INFO: Pod "pod-secrets-eef4f40c-ef0d-4cf0-819d-01f1dcf308db": Phase="Pending", Reason="", readiness=false. Elapsed: 9.852375ms Apr 2 00:14:30.139: INFO: Pod "pod-secrets-eef4f40c-ef0d-4cf0-819d-01f1dcf308db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013313664s Apr 2 00:14:32.142: INFO: Pod "pod-secrets-eef4f40c-ef0d-4cf0-819d-01f1dcf308db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016732653s STEP: Saw pod success Apr 2 00:14:32.142: INFO: Pod "pod-secrets-eef4f40c-ef0d-4cf0-819d-01f1dcf308db" satisfied condition "Succeeded or Failed" Apr 2 00:14:32.144: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-eef4f40c-ef0d-4cf0-819d-01f1dcf308db container secret-volume-test: STEP: delete the pod Apr 2 00:14:32.160: INFO: Waiting for pod pod-secrets-eef4f40c-ef0d-4cf0-819d-01f1dcf308db to disappear Apr 2 00:14:32.209: INFO: Pod pod-secrets-eef4f40c-ef0d-4cf0-819d-01f1dcf308db no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:14:32.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-732" for this suite. 
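The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above come from the e2e framework's pod-wait helper, which polls the pod phase and logs the elapsed time on each attempt. A minimal sketch of that polling pattern is below; `get_phase` is a hypothetical stub standing in for a real API call, not the framework's actual Go implementation.

```python
import time

def wait_for_pod_terminal(get_phase, timeout_s=300, poll_s=2.0,
                          clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or timeout_s elapses.

    Mirrors the log pattern above: each poll reports the current phase and
    elapsed time; the wait ends as soon as the pod is Succeeded or Failed.
    """
    start = clock()
    while True:
        elapsed = clock() - start
        phase = get_phase()
        print(f'Pod phase={phase!r}, elapsed={elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")
        sleep(poll_s)

# Simulated pod: Pending for two polls, then Succeeded (as in the runs above).
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_terminal(lambda: next(phases),
                               timeout_s=300, poll_s=0, sleep=lambda s: None)
print(result)  # Succeeded
```

The stubbed clock and sleep arguments make the loop testable without real delays; in the real suite the equivalent helper queries the apiserver for the pod object on each iteration.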
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2007,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:14:32.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:14:36.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1053" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:14:36.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 2 00:14:36.363: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-a 69fd9091-ab29-4234-9448-1d50ebd36908 4669357 0 2020-04-02 00:14:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:14:36.363: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-a 69fd9091-ab29-4234-9448-1d50ebd36908 4669357 0 2020-04-02 00:14:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 2 00:14:46.376: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-a 69fd9091-ab29-4234-9448-1d50ebd36908 4669402 0 2020-04-02 00:14:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:14:46.376: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-a 69fd9091-ab29-4234-9448-1d50ebd36908 4669402 0 2020-04-02 00:14:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 2 00:14:56.384: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-a 69fd9091-ab29-4234-9448-1d50ebd36908 4669432 0 2020-04-02 00:14:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:14:56.384: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-a 69fd9091-ab29-4234-9448-1d50ebd36908 4669432 0 2020-04-02 00:14:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 2 00:15:06.392: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-a 69fd9091-ab29-4234-9448-1d50ebd36908 4669465 0 2020-04-02 00:14:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:15:06.392: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-a 69fd9091-ab29-4234-9448-1d50ebd36908 4669465 0 2020-04-02 00:14:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 2 00:15:16.400: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-b a03439b1-15ac-4c2c-ba20-a2f598653df3 4669500 0 2020-04-02 00:15:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:15:16.400: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-b a03439b1-15ac-4c2c-ba20-a2f598653df3 4669500 0 2020-04-02 00:15:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 2 00:15:26.407: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-b a03439b1-15ac-4c2c-ba20-a2f598653df3 4669532 0 2020-04-02 00:15:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:15:26.407: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1876 /api/v1/namespaces/watch-1876/configmaps/e2e-watch-test-configmap-b a03439b1-15ac-4c2c-ba20-a2f598653df3 4669532 0 2020-04-02 00:15:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:15:36.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1876" for this suite. • [SLOW TEST:60.118 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":132,"skipped":2047,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:15:36.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 
STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2312.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2312.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2312.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2312.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2312.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2312.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2312.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2312.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2312.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2312.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 121.56.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.56.121_udp@PTR;check="$$(dig +tcp +noall +answer +search 121.56.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.56.121_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2312.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2312.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2312.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2312.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2312.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2312.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2312.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2312.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2312.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2312.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2312.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 121.56.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.56.121_udp@PTR;check="$$(dig +tcp +noall +answer +search 121.56.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.56.121_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 00:15:42.579: INFO: Unable to read wheezy_udp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:42.583: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:42.586: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:42.589: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:42.613: INFO: Unable to read jessie_udp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:42.619: INFO: Unable to read jessie_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:42.623: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod 
dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:42.626: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:42.639: INFO: Lookups using dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837 failed for: [wheezy_udp@dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local jessie_udp@dns-test-service.dns-2312.svc.cluster.local jessie_tcp@dns-test-service.dns-2312.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local] Apr 2 00:15:47.644: INFO: Unable to read wheezy_udp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:47.659: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:47.661: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:47.664: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod 
dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:47.685: INFO: Unable to read jessie_udp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:47.688: INFO: Unable to read jessie_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:47.691: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:47.694: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:47.713: INFO: Lookups using dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837 failed for: [wheezy_udp@dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local jessie_udp@dns-test-service.dns-2312.svc.cluster.local jessie_tcp@dns-test-service.dns-2312.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local] Apr 2 00:15:52.644: INFO: Unable to read wheezy_udp@dns-test-service.dns-2312.svc.cluster.local from pod 
dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:52.648: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:52.652: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:52.655: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:52.677: INFO: Unable to read jessie_udp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:52.680: INFO: Unable to read jessie_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:52.683: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:52.685: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the 
requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:52.699: INFO: Lookups using dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837 failed for: [wheezy_udp@dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local jessie_udp@dns-test-service.dns-2312.svc.cluster.local jessie_tcp@dns-test-service.dns-2312.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local] Apr 2 00:15:57.644: INFO: Unable to read wheezy_udp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:57.648: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:57.652: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:57.655: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:57.675: INFO: Unable to read jessie_udp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods 
dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:57.678: INFO: Unable to read jessie_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:57.681: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:57.684: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:15:57.702: INFO: Lookups using dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837 failed for: [wheezy_udp@dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local jessie_udp@dns-test-service.dns-2312.svc.cluster.local jessie_tcp@dns-test-service.dns-2312.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local] Apr 2 00:16:02.643: INFO: Unable to read wheezy_udp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:02.645: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) 
Apr 2 00:16:02.648: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:02.650: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:02.668: INFO: Unable to read jessie_udp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:02.671: INFO: Unable to read jessie_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:02.674: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:02.677: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:02.696: INFO: Lookups using dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837 failed for: [wheezy_udp@dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local 
jessie_udp@dns-test-service.dns-2312.svc.cluster.local jessie_tcp@dns-test-service.dns-2312.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local] Apr 2 00:16:07.646: INFO: Unable to read wheezy_udp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:07.652: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:07.707: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:07.711: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:07.731: INFO: Unable to read jessie_udp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:07.733: INFO: Unable to read jessie_tcp@dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:07.735: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod 
dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:07.738: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local from pod dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837: the server could not find the requested resource (get pods dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837) Apr 2 00:16:07.754: INFO: Lookups using dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837 failed for: [wheezy_udp@dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@dns-test-service.dns-2312.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local jessie_udp@dns-test-service.dns-2312.svc.cluster.local jessie_tcp@dns-test-service.dns-2312.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2312.svc.cluster.local] Apr 2 00:16:12.703: INFO: DNS probes using dns-2312/dns-test-6ea1071f-8f96-4fe9-a1a0-b1802c330837 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:16:13.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2312" for this suite. 
• [SLOW TEST:36.837 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":133,"skipped":2059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:16:13.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 2 00:16:13.324: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:16:28.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2249" for this suite. 
• [SLOW TEST:15.311 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":134,"skipped":2105,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:16:28.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 2 00:16:33.220: INFO: Successfully updated pod "annotationupdate61309980-350e-4f33-aeb4-4d6535f800be"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:16:35.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3692" for this suite.
• [SLOW TEST:6.696 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2116,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:16:35.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 2 00:16:39.863: INFO: Successfully updated pod "labelsupdate1747c3ac-fe25-441e-9c14-fc6379da3ede"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:16:41.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8149" for this suite.
• [SLOW TEST:6.665 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:16:41.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 2 00:16:41.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8eb11bd-d8e5-4003-993f-c2f335183059" in namespace "downward-api-6905" to be "Succeeded or Failed"
Apr 2 00:16:41.989: INFO: Pod "downwardapi-volume-d8eb11bd-d8e5-4003-993f-c2f335183059": Phase="Pending", Reason="", readiness=false. Elapsed: 3.810265ms
Apr 2 00:16:43.993: INFO: Pod "downwardapi-volume-d8eb11bd-d8e5-4003-993f-c2f335183059": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007907272s
Apr 2 00:16:45.996: INFO: Pod "downwardapi-volume-d8eb11bd-d8e5-4003-993f-c2f335183059": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010944307s
STEP: Saw pod success
Apr 2 00:16:45.996: INFO: Pod "downwardapi-volume-d8eb11bd-d8e5-4003-993f-c2f335183059" satisfied condition "Succeeded or Failed"
Apr 2 00:16:45.998: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d8eb11bd-d8e5-4003-993f-c2f335183059 container client-container:
STEP: delete the pod
Apr 2 00:16:46.028: INFO: Waiting for pod downwardapi-volume-d8eb11bd-d8e5-4003-993f-c2f335183059 to disappear
Apr 2 00:16:46.037: INFO: Pod downwardapi-volume-d8eb11bd-d8e5-4003-993f-c2f335183059 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:16:46.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6905" for this suite.
•
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2222,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:16:46.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 2 00:16:46.914: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 2 00:16:48.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383406, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383406, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383406, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383406, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 2 00:16:51.952: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:16:52.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9076" for this suite.
STEP: Destroying namespace "webhook-9076-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.384 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":138,"skipped":2223,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:16:52.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:16:52.502: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 2 00:16:52.564: INFO: Number of nodes with available pods: 0
Apr 2 00:16:52.564: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 2 00:16:52.593: INFO: Number of nodes with available pods: 0
Apr 2 00:16:52.593: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:16:53.598: INFO: Number of nodes with available pods: 0
Apr 2 00:16:53.598: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:16:54.598: INFO: Number of nodes with available pods: 0
Apr 2 00:16:54.598: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:16:55.598: INFO: Number of nodes with available pods: 1
Apr 2 00:16:55.598: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 2 00:16:55.640: INFO: Number of nodes with available pods: 1
Apr 2 00:16:55.640: INFO: Number of running nodes: 0, number of available pods: 1
Apr 2 00:16:56.644: INFO: Number of nodes with available pods: 0
Apr 2 00:16:56.644: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 2 00:16:56.666: INFO: Number of nodes with available pods: 0
Apr 2 00:16:56.666: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:16:57.749: INFO: Number of nodes with available pods: 0
Apr 2 00:16:57.749: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:16:58.670: INFO: Number of nodes with available pods: 0
Apr 2 00:16:58.670: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:16:59.671: INFO: Number of nodes with available pods: 0
Apr 2 00:16:59.671: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:17:00.671: INFO: Number of nodes with available pods: 0
Apr 2 00:17:00.671: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:17:01.671: INFO: Number of nodes with available pods: 0
Apr 2 00:17:01.671: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:17:02.671: INFO: Number of nodes with available pods: 1
Apr 2 00:17:02.671: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-829, will wait for the garbage collector to delete the pods
Apr 2 00:17:02.736: INFO: Deleting DaemonSet.extensions daemon-set took: 6.465052ms
Apr 2 00:17:02.836: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.268639ms
Apr 2 00:17:12.840: INFO: Number of nodes with available pods: 0
Apr 2 00:17:12.840: INFO: Number of running nodes: 0, number of available pods: 0
Apr 2 00:17:12.842: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-829/daemonsets","resourceVersion":"4670141"},"items":null}
Apr 2 00:17:12.844: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-829/pods","resourceVersion":"4670141"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:17:12.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-829" for this suite.
• [SLOW TEST:20.453 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":139,"skipped":2230,"failed":0} [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:17:12.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 2 00:17:12.923: INFO: PodSpec: initContainers in spec.initContainers Apr 2 00:17:59.871: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-aac76dac-835f-40ab-aaf2-ee262c2e2b6a", GenerateName:"", Namespace:"init-container-7761", SelfLink:"/api/v1/namespaces/init-container-7761/pods/pod-init-aac76dac-835f-40ab-aaf2-ee262c2e2b6a", UID:"5a2b0395-6945-4630-bff6-3f23b123f3d8", ResourceVersion:"4670329", Generation:0, 
CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721383432, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"923166086"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vxp2x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0052a9600), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vxp2x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vxp2x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vxp2x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003db0f38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002c22af0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003db1040)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003db1060)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003db1068), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003db106c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383433, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383433, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383433, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721383432, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.117", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.117"}}, StartTime:(*v1.Time)(0xc002e92b20), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c22bd0)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c22c40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://131418b867af6f14373313031fa542ff0a3811332542dfa2f444e97239726730", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e92b60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e92b40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc003db10ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:17:59.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7761" for this suite. 
• [SLOW TEST:47.002 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":140,"skipped":2230,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:17:59.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8322 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8322;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8322 A)" && test 
-n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8322;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8322.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8322.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8322.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8322.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8322.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8322.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8322.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8322.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8322.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8322.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8322.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.131.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.131.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.131.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.131.160_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8322 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8322;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8322 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8322;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8322.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8322.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8322.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8322.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8322.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8322.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8322.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8322.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8322.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8322.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8322.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8322.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.131.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.131.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.131.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.131.160_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 00:18:06.032: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.035: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.039: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.042: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.045: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods 
dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.048: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.051: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.055: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.075: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.077: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.080: INFO: Unable to read jessie_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.083: INFO: Unable to read jessie_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.087: INFO: Unable to read jessie_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested 
resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.090: INFO: Unable to read jessie_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.093: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.096: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:06.113: INFO: Lookups using dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8322 wheezy_tcp@dns-test-service.dns-8322 wheezy_udp@dns-test-service.dns-8322.svc wheezy_tcp@dns-test-service.dns-8322.svc wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8322 jessie_tcp@dns-test-service.dns-8322 jessie_udp@dns-test-service.dns-8322.svc jessie_tcp@dns-test-service.dns-8322.svc jessie_udp@_http._tcp.dns-test-service.dns-8322.svc jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc] Apr 2 00:18:11.118: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.121: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the 
requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.125: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.128: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.131: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.134: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.137: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.139: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.161: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.164: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could 
not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.166: INFO: Unable to read jessie_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.169: INFO: Unable to read jessie_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.172: INFO: Unable to read jessie_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.175: INFO: Unable to read jessie_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.178: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.181: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:11.201: INFO: Lookups using dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8322 wheezy_tcp@dns-test-service.dns-8322 wheezy_udp@dns-test-service.dns-8322.svc wheezy_tcp@dns-test-service.dns-8322.svc wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8322 jessie_tcp@dns-test-service.dns-8322 jessie_udp@dns-test-service.dns-8322.svc jessie_tcp@dns-test-service.dns-8322.svc jessie_udp@_http._tcp.dns-test-service.dns-8322.svc jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc] Apr 2 00:18:16.119: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.123: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.127: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.129: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.134: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.137: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.140: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc from pod 
dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.143: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.163: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.166: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.168: INFO: Unable to read jessie_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.171: INFO: Unable to read jessie_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.174: INFO: Unable to read jessie_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.176: INFO: Unable to read jessie_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.179: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.182: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:16.199: INFO: Lookups using dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8322 wheezy_tcp@dns-test-service.dns-8322 wheezy_udp@dns-test-service.dns-8322.svc wheezy_tcp@dns-test-service.dns-8322.svc wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8322 jessie_tcp@dns-test-service.dns-8322 jessie_udp@dns-test-service.dns-8322.svc jessie_tcp@dns-test-service.dns-8322.svc jessie_udp@_http._tcp.dns-test-service.dns-8322.svc jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc] Apr 2 00:18:21.118: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.122: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.126: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.131: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.134: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.137: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.140: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.143: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.165: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.168: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.171: INFO: Unable to read jessie_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.174: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.176: INFO: Unable to read jessie_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.180: INFO: Unable to read jessie_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.183: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.186: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:21.203: INFO: Lookups using dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8322 wheezy_tcp@dns-test-service.dns-8322 wheezy_udp@dns-test-service.dns-8322.svc wheezy_tcp@dns-test-service.dns-8322.svc wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8322 jessie_tcp@dns-test-service.dns-8322 jessie_udp@dns-test-service.dns-8322.svc jessie_tcp@dns-test-service.dns-8322.svc jessie_udp@_http._tcp.dns-test-service.dns-8322.svc jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc] 
Apr 2 00:18:26.125: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.152: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.155: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.158: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.161: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.163: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.166: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.169: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods 
dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.191: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.194: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.196: INFO: Unable to read jessie_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.199: INFO: Unable to read jessie_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.202: INFO: Unable to read jessie_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.205: INFO: Unable to read jessie_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.208: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.211: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested 
resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:26.231: INFO: Lookups using dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8322 wheezy_tcp@dns-test-service.dns-8322 wheezy_udp@dns-test-service.dns-8322.svc wheezy_tcp@dns-test-service.dns-8322.svc wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8322 jessie_tcp@dns-test-service.dns-8322 jessie_udp@dns-test-service.dns-8322.svc jessie_tcp@dns-test-service.dns-8322.svc jessie_udp@_http._tcp.dns-test-service.dns-8322.svc jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc] Apr 2 00:18:31.118: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.122: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.126: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.130: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.133: INFO: Unable to read wheezy_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods 
dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.136: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.140: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.143: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.165: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.169: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.172: INFO: Unable to read jessie_udp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.174: INFO: Unable to read jessie_tcp@dns-test-service.dns-8322 from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.176: INFO: Unable to read jessie_udp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested 
resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.179: INFO: Unable to read jessie_tcp@dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.181: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.184: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc from pod dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9: the server could not find the requested resource (get pods dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9) Apr 2 00:18:31.201: INFO: Lookups using dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8322 wheezy_tcp@dns-test-service.dns-8322 wheezy_udp@dns-test-service.dns-8322.svc wheezy_tcp@dns-test-service.dns-8322.svc wheezy_udp@_http._tcp.dns-test-service.dns-8322.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8322.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8322 jessie_tcp@dns-test-service.dns-8322 jessie_udp@dns-test-service.dns-8322.svc jessie_tcp@dns-test-service.dns-8322.svc jessie_udp@_http._tcp.dns-test-service.dns-8322.svc jessie_tcp@_http._tcp.dns-test-service.dns-8322.svc] Apr 2 00:18:36.197: INFO: DNS probes using dns-8322/dns-test-1b199c80-6ef3-4baa-b79b-ee4105d364d9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:18:36.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "dns-8322" for this suite. • [SLOW TEST:36.775 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":141,"skipped":2254,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:18:36.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-ce1efe5a-3daa-4840-9053-576fd8538cdc [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:18:36.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1550" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":142,"skipped":2257,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:18:36.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-aeff2509-f965-4378-913f-562a38e92e68 STEP: Creating configMap with name cm-test-opt-upd-3dc25e30-21c3-44c6-88a7-c9079e8cfe18 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-aeff2509-f965-4378-913f-562a38e92e68 STEP: Updating configmap cm-test-opt-upd-3dc25e30-21c3-44c6-88a7-c9079e8cfe18 STEP: Creating configMap with name cm-test-opt-create-12b0fbf4-23ca-4e0f-bc2d-0c00e4798f2d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:20:03.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2352" for this suite. 
• [SLOW TEST:86.521 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2267,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:20:03.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Apr 2 00:20:03.862: INFO: created pod pod-service-account-defaultsa
Apr 2 00:20:03.862: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 2 00:20:03.872: INFO: created pod pod-service-account-mountsa
Apr 2 00:20:03.872: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 2 00:20:03.897: INFO: created pod pod-service-account-nomountsa
Apr 2 00:20:03.897: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 2 00:20:03.909: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 2 00:20:03.909: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 2 00:20:03.946: INFO: created pod pod-service-account-mountsa-mountspec
Apr 2 00:20:03.946: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 2 00:20:03.999: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 2 00:20:03.999: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 2 00:20:04.024: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 2 00:20:04.024: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 2 00:20:04.069: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 2 00:20:04.069: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 2 00:20:04.076: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 2 00:20:04.076: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:20:04.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2955" for this suite.
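The nine pods above exercise the automount precedence rule: a pod-level `automountServiceAccountToken` overrides the ServiceAccount's setting, and the default when neither is set is to mount. A minimal Python sketch of that precedence (the helper name is ours, not from the e2e suite):

```python
def token_volume_mounted(sa_automount, pod_automount):
    """Mirror the precedence the automount test exercises:
    the pod spec wins over the ServiceAccount; the default is True."""
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True

# (SA setting, pod setting) -> mounted?, matching the values logged above.
cases = {
    ("defaultsa", None, None): True,
    ("mountsa", True, None): True,
    ("nomountsa", False, None): False,
    ("nomountsa-mountspec", False, True): True,
    ("mountsa-nomountspec", True, False): False,
}
for (name, sa, pod), expected in cases.items():
    assert token_volume_mounted(sa, pod) == expected, name
```

The `nomountsa-mountspec: true` and `mountsa-nomountspec: false` lines in the log are the interesting rows: the pod spec silently wins either way.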
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":144,"skipped":2324,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:20:04.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-cbl9
STEP: Creating a pod to test atomic-volume-subpath
Apr 2 00:20:04.276: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cbl9" in namespace "subpath-5692" to be "Succeeded or Failed"
Apr 2 00:20:04.280: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.332385ms
Apr 2 00:20:06.284: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007473322s
Apr 2 00:20:08.287: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011010271s
Apr 2 00:20:10.430: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153935831s
Apr 2 00:20:12.447: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170568503s
Apr 2 00:20:14.520: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.243809541s
Apr 2 00:20:16.524: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Running", Reason="", readiness=true. Elapsed: 12.247990621s
Apr 2 00:20:18.529: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Running", Reason="", readiness=true. Elapsed: 14.252678064s
Apr 2 00:20:20.534: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Running", Reason="", readiness=true. Elapsed: 16.257246246s
Apr 2 00:20:22.538: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Running", Reason="", readiness=true. Elapsed: 18.261726942s
Apr 2 00:20:24.543: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Running", Reason="", readiness=true. Elapsed: 20.26622997s
Apr 2 00:20:26.547: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Running", Reason="", readiness=true. Elapsed: 22.270583644s
Apr 2 00:20:28.551: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Running", Reason="", readiness=true. Elapsed: 24.274929376s
Apr 2 00:20:30.556: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Running", Reason="", readiness=true. Elapsed: 26.279298786s
Apr 2 00:20:32.560: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Running", Reason="", readiness=true. Elapsed: 28.283796911s
Apr 2 00:20:34.564: INFO: Pod "pod-subpath-test-configmap-cbl9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.287800968s
STEP: Saw pod success
Apr 2 00:20:34.564: INFO: Pod "pod-subpath-test-configmap-cbl9" satisfied condition "Succeeded or Failed"
Apr 2 00:20:34.568: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-cbl9 container test-container-subpath-configmap-cbl9:
STEP: delete the pod
Apr 2 00:20:34.587: INFO: Waiting for pod pod-subpath-test-configmap-cbl9 to disappear
Apr 2 00:20:34.707: INFO: Pod pod-subpath-test-configmap-cbl9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-cbl9
Apr 2 00:20:34.707: INFO: Deleting pod "pod-subpath-test-configmap-cbl9" in namespace "subpath-5692"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:20:34.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5692" for this suite.

• [SLOW TEST:30.559 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":145,"skipped":2363,"failed":0}
SS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:20:34.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:20:34.917: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Pending, waiting for it to be Running (with Ready = true)
Apr 2 00:20:36.921: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Pending, waiting for it to be Running (with Ready = true)
Apr 2 00:20:38.921: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Running (Ready = false)
Apr 2 00:20:40.922: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Running (Ready = false)
Apr 2 00:20:42.921: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Running (Ready = false)
Apr 2 00:20:44.921: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Running (Ready = false)
Apr 2 00:20:46.921: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Running (Ready = false)
Apr 2 00:20:48.921: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Running (Ready = false)
Apr 2 00:20:50.921: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Running (Ready = false)
Apr 2 00:20:52.921: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Running (Ready = false)
Apr 2 00:20:54.921: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Running (Ready = false)
Apr 2 00:20:56.921: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Running (Ready = false)
Apr 2 00:20:58.921: INFO: The status of Pod test-webserver-910a5653-07bb-422e-acfe-aced25a7f86a is Running (Ready = true)
Apr 2 00:20:58.924: INFO: Container started at 2020-04-02 00:20:36 +0000 UTC, pod became ready at 2020-04-02 00:20:58 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:20:58.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7236" for this suite.

• [SLOW TEST:24.216 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2365,"failed":0}
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:20:58.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 2 00:20:58.998: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:21:03.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5588" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":147,"skipped":2372,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:21:03.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:21:03.588: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 2 00:21:05.664: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:21:06.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9480" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":148,"skipped":2391,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:21:06.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 2 00:21:06.980: INFO: Waiting up to 5m0s for pod "pod-d617c475-40b3-4f49-b95a-c6f5cc0089b2" in namespace "emptydir-1483" to be "Succeeded or Failed"
Apr 2 00:21:07.026: INFO: Pod "pod-d617c475-40b3-4f49-b95a-c6f5cc0089b2": Phase="Pending", Reason="", readiness=false. Elapsed: 46.049306ms
Apr 2 00:21:09.029: INFO: Pod "pod-d617c475-40b3-4f49-b95a-c6f5cc0089b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049211336s
Apr 2 00:21:11.033: INFO: Pod "pod-d617c475-40b3-4f49-b95a-c6f5cc0089b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053400522s
STEP: Saw pod success
Apr 2 00:21:11.033: INFO: Pod "pod-d617c475-40b3-4f49-b95a-c6f5cc0089b2" satisfied condition "Succeeded or Failed"
Apr 2 00:21:11.040: INFO: Trying to get logs from node latest-worker pod pod-d617c475-40b3-4f49-b95a-c6f5cc0089b2 container test-container:
STEP: delete the pod
Apr 2 00:21:11.080: INFO: Waiting for pod pod-d617c475-40b3-4f49-b95a-c6f5cc0089b2 to disappear
Apr 2 00:21:11.085: INFO: Pod pod-d617c475-40b3-4f49-b95a-c6f5cc0089b2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:21:11.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1483" for this suite.
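The repeated `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` / `Phase="Pending" … Elapsed:` lines above come from a poll loop over the pod phase. A rough sketch of that wait, with a fake phase source instead of an API client (all names here are ours, not the e2e framework's):

```python
import time

TERMINAL = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, timeout_s=300, poll_s=2.0, sleep=time.sleep):
    """Poll get_phase() until the pod reaches Succeeded or Failed,
    logging elapsed time in the same spirit as the e2e framework."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in TERMINAL:
            return phase
        if elapsed > timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        sleep(poll_s)

# Simulated sequence mirroring the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None)
assert result == "Succeeded"
```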
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2411,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:21:11.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 2 00:21:11.160: INFO: Waiting up to 5m0s for pod "pod-a15c1aca-5aa5-4001-ae9b-dd9335456af6" in namespace "emptydir-2000" to be "Succeeded or Failed"
Apr 2 00:21:11.164: INFO: Pod "pod-a15c1aca-5aa5-4001-ae9b-dd9335456af6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.591394ms
Apr 2 00:21:13.226: INFO: Pod "pod-a15c1aca-5aa5-4001-ae9b-dd9335456af6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066319714s
Apr 2 00:21:15.250: INFO: Pod "pod-a15c1aca-5aa5-4001-ae9b-dd9335456af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09027323s
STEP: Saw pod success
Apr 2 00:21:15.250: INFO: Pod "pod-a15c1aca-5aa5-4001-ae9b-dd9335456af6" satisfied condition "Succeeded or Failed"
Apr 2 00:21:15.253: INFO: Trying to get logs from node latest-worker pod pod-a15c1aca-5aa5-4001-ae9b-dd9335456af6 container test-container:
STEP: delete the pod
Apr 2 00:21:15.285: INFO: Waiting for pod pod-a15c1aca-5aa5-4001-ae9b-dd9335456af6 to disappear
Apr 2 00:21:15.299: INFO: Pod pod-a15c1aca-5aa5-4001-ae9b-dd9335456af6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:21:15.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2000" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2437,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:21:15.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 2 00:21:15.378: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6ae6a58-f98c-49d2-979c-a9257a665674" in namespace "projected-336" to be "Succeeded or Failed"
Apr 2 00:21:15.383: INFO: Pod "downwardapi-volume-c6ae6a58-f98c-49d2-979c-a9257a665674": Phase="Pending", Reason="", readiness=false. Elapsed: 4.50075ms
Apr 2 00:21:17.387: INFO: Pod "downwardapi-volume-c6ae6a58-f98c-49d2-979c-a9257a665674": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008567996s
Apr 2 00:21:19.391: INFO: Pod "downwardapi-volume-c6ae6a58-f98c-49d2-979c-a9257a665674": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012429524s
STEP: Saw pod success
Apr 2 00:21:19.391: INFO: Pod "downwardapi-volume-c6ae6a58-f98c-49d2-979c-a9257a665674" satisfied condition "Succeeded or Failed"
Apr 2 00:21:19.394: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c6ae6a58-f98c-49d2-979c-a9257a665674 container client-container:
STEP: delete the pod
Apr 2 00:21:19.425: INFO: Waiting for pod downwardapi-volume-c6ae6a58-f98c-49d2-979c-a9257a665674 to disappear
Apr 2 00:21:19.431: INFO: Pod downwardapi-volume-c6ae6a58-f98c-49d2-979c-a9257a665674 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:21:19.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-336" for this suite.
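Both the emptyDir `(…,0666,…)` cases and the downward API "set mode on item file" case come down to the numeric mode the kubelet writes onto the volume file. The test containers print the mode in `ls -l` form; Python's `stat.filemode` shows the same mapping (the 0400 example mode is ours, chosen for contrast, not necessarily what this test used):

```python
import stat

# 0o666 on a regular file renders as read/write for user, group, and other.
assert stat.filemode(stat.S_IFREG | 0o666) == "-rw-rw-rw-"
# A stricter item mode such as 0o400 would render as owner-read-only.
assert stat.filemode(stat.S_IFREG | 0o400) == "-r--------"
```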
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2445,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:21:19.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 2 00:21:19.503: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 2 00:21:19.511: INFO: Waiting for terminating namespaces to be deleted...
Apr 2 00:21:19.513: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 2 00:21:19.518: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 00:21:19.518: INFO: Container kindnet-cni ready: true, restart count 0
Apr 2 00:21:19.518: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 00:21:19.518: INFO: Container kube-proxy ready: true, restart count 0
Apr 2 00:21:19.518: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 2 00:21:19.533: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 00:21:19.533: INFO: Container kindnet-cni ready: true, restart count 0
Apr 2 00:21:19.533: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 2 00:21:19.533: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node latest-worker
STEP: verifying the node has the label node latest-worker2
Apr 2 00:21:19.602: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker
Apr 2 00:21:19.602: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2
Apr 2 00:21:19.602: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2
Apr 2 00:21:19.602: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker
STEP: Starting Pods to consume most of the cluster CPU.
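The predicate being validated here is plain CPU accounting: a pod fits a node if its CPU request plus the requests already on the node stays within allocatable. A simplified Python sketch, using millicpu values as in the `cpu=100m` lines above (the node capacity is a made-up figure, not taken from this cluster):

```python
def parse_millicpu(s):
    """Parse a CPU quantity: '100m' -> 100 millicores, '2' -> 2000."""
    return int(s[:-1]) if s.endswith("m") else int(s) * 1000

def fits(node_allocatable_m, existing_requests_m, pod_request_m):
    """True if the pod's request fits alongside existing requests."""
    return sum(existing_requests_m) + pod_request_m <= node_allocatable_m

# Hypothetical 16-core node already running kindnet (100m) and kube-proxy (0m):
allocatable = parse_millicpu("16")                      # 16000 millicores
existing = [parse_millicpu("100m"), parse_millicpu("0m")]

assert fits(allocatable, existing, parse_millicpu("11130m"))        # filler pod fits
assert not fits(allocatable, existing + [11130], parse_millicpu("5"))  # 5 more cores do not
```

This is the shape of the `Insufficient cpu` failure logged below: once the filler pods consume most of each node's CPU, the additional pod's request cannot fit on any schedulable node.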
Apr 2 00:21:19.602: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker
Apr 2 00:21:19.636: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-9e66aeee-9f8a-4471-90eb-c13d28b80e19.1601d8c3e27686e8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5965/filler-pod-9e66aeee-9f8a-4471-90eb-c13d28b80e19 to latest-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9e66aeee-9f8a-4471-90eb-c13d28b80e19.1601d8c43192bdf8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9e66aeee-9f8a-4471-90eb-c13d28b80e19.1601d8c46429d779], Reason = [Created], Message = [Created container filler-pod-9e66aeee-9f8a-4471-90eb-c13d28b80e19]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9e66aeee-9f8a-4471-90eb-c13d28b80e19.1601d8c4789553be], Reason = [Started], Message = [Started container filler-pod-9e66aeee-9f8a-4471-90eb-c13d28b80e19]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f420e681-4f10-4cbf-b5a0-c139f13cfb07.1601d8c3e3e61a4a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5965/filler-pod-f420e681-4f10-4cbf-b5a0-c139f13cfb07 to latest-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f420e681-4f10-4cbf-b5a0-c139f13cfb07.1601d8c45ad50b4a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f420e681-4f10-4cbf-b5a0-c139f13cfb07.1601d8c48e208430], Reason = [Created], Message = [Created container filler-pod-f420e681-4f10-4cbf-b5a0-c139f13cfb07]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f420e681-4f10-4cbf-b5a0-c139f13cfb07.1601d8c49c62cdf0], Reason = [Started], Message = [Started container filler-pod-f420e681-4f10-4cbf-b5a0-c139f13cfb07]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1601d8c4d360afdf], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node latest-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node latest-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:21:24.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5965" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:5.317 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":152,"skipped":2456,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:21:24.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 2 00:21:29.382: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-557 pod-service-account-05ccb464-625e-4654-926e-aa7dc2449578 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 2 00:21:29.601: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-557 pod-service-account-05ccb464-625e-4654-926e-aa7dc2449578 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 2 00:21:29.804: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-557 pod-service-account-05ccb464-625e-4654-926e-aa7dc2449578 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:21:30.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-557" for this suite.
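The three `kubectl exec … cat` commands above read the standard projected token files, which always live under one fixed directory. A tiny sketch assembling those commands (pod and namespace names taken from the log; the helper itself is ours):

```python
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def read_token_cmds(namespace, pod, container="test"):
    """Build the three read commands the mount test issues."""
    return [
        f"kubectl exec --namespace={namespace} {pod} -c={container} -- cat {SA_DIR}/{name}"
        for name in ("token", "ca.crt", "namespace")
    ]

cmds = read_token_cmds("svcaccounts-557",
                       "pod-service-account-05ccb464-625e-4654-926e-aa7dc2449578")
assert len(cmds) == 3
assert cmds[1].endswith("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")
```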
• [SLOW TEST:5.343 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":153,"skipped":2488,"failed":0}
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:21:30.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:22:01.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7768" for this suite.
STEP: Destroying namespace "nsdeletetest-3069" for this suite.
Apr 2 00:22:01.703: INFO: Namespace nsdeletetest-3069 was already deleted STEP: Destroying namespace "nsdeletetest-8301" for this suite. • [SLOW TEST:31.607 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":154,"skipped":2490,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:22:01.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 2 00:22:01.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod 
--image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5076' Apr 2 00:22:01.855: INFO: stderr: "" Apr 2 00:22:01.855: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 2 00:22:06.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5076 -o json' Apr 2 00:22:06.995: INFO: stderr: "" Apr 2 00:22:06.995: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-02T00:22:01Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5076\",\n \"resourceVersion\": \"4671615\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5076/pods/e2e-test-httpd-pod\",\n \"uid\": \"83b28f23-5760-44c5-9b92-d6f9ca2a1926\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-xhmlb\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": 
\"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-xhmlb\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-xhmlb\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-02T00:22:01Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-02T00:22:04Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-02T00:22:04Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-02T00:22:01Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://6c9a6bcc5fc9c4fb98a0750a18df5f4872eb8cb74beb387b43b6cd1f7871bcc3\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-02T00:22:04Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.20\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.20\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-02T00:22:01Z\"\n }\n}\n" STEP: replace the image in the pod Apr 2 00:22:06.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5076' Apr 2 00:22:07.288: INFO: stderr: "" Apr 2 00:22:07.288: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image 
docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Apr 2 00:22:07.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5076' Apr 2 00:22:10.785: INFO: stderr: "" Apr 2 00:22:10.785: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:22:10.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5076" for this suite. • [SLOW TEST:9.084 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":155,"skipped":2542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:22:10.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should 
support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 2 00:22:10.874: INFO: Waiting up to 5m0s for pod "pod-9d833c29-c4fb-4cfb-8921-6b7d2c9508f6" in namespace "emptydir-6468" to be "Succeeded or Failed" Apr 2 00:22:10.878: INFO: Pod "pod-9d833c29-c4fb-4cfb-8921-6b7d2c9508f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.932551ms Apr 2 00:22:12.898: INFO: Pod "pod-9d833c29-c4fb-4cfb-8921-6b7d2c9508f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023901993s Apr 2 00:22:14.902: INFO: Pod "pod-9d833c29-c4fb-4cfb-8921-6b7d2c9508f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028285812s STEP: Saw pod success Apr 2 00:22:14.902: INFO: Pod "pod-9d833c29-c4fb-4cfb-8921-6b7d2c9508f6" satisfied condition "Succeeded or Failed" Apr 2 00:22:14.905: INFO: Trying to get logs from node latest-worker pod pod-9d833c29-c4fb-4cfb-8921-6b7d2c9508f6 container test-container: STEP: delete the pod Apr 2 00:22:14.924: INFO: Waiting for pod pod-9d833c29-c4fb-4cfb-8921-6b7d2c9508f6 to disappear Apr 2 00:22:14.935: INFO: Pod pod-9d833c29-c4fb-4cfb-8921-6b7d2c9508f6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:22:14.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6468" for this suite. 
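The EmptyDir test above polls the pod roughly every two seconds, logging phase and elapsed time, until the pod reports "Succeeded" or "Failed" within a 5m0s budget. A minimal sketch of that polling loop follows; this is an illustrative reimplementation, not the e2e framework's actual wait helper, and the injected phase sequence stands in for a live API server.

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, poll=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or `timeout` elapses.

    Returns (phase, elapsed_seconds); raises TimeoutError on expiry.
    `clock` and `sleep` are injectable so the loop can be tested offline.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in want:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(poll)

# Stand-in for the API server: Pending twice, then Succeeded,
# mirroring the three status lines in the log above.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, elapsed = wait_for_pod_phase(lambda: next(phases), sleep=lambda s: None)
# phase == "Succeeded"
```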
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2577,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:22:14.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-3a79318d-b68b-4464-8285-f799b2fc6f0a STEP: Creating secret with name s-test-opt-upd-64ea9b10-5777-4a9e-ab15-690117e73e93 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3a79318d-b68b-4464-8285-f799b2fc6f0a STEP: Updating secret s-test-opt-upd-64ea9b10-5777-4a9e-ab15-690117e73e93 STEP: Creating secret with name s-test-opt-create-88063feb-4159-430e-97e7-569db3120d05 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:23:23.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-499" for this suite. 
• [SLOW TEST:68.507 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2582,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:23:23.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:23:23.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9893" for this suite. STEP: Destroying namespace "nspatchtest-e8523809-dbac-413d-8797-370f805cf0b9-5915" for this suite. 
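The "patching the Namespace" / "ensuring it has the label" steps above amount to a merge patch on `metadata.labels`. The sketch below models RFC 7386 (JSON merge patch) semantics in isolation; the label key and value are hypothetical stand-ins, not read from this test run.

```python
def merge_patch(target, patch):
    """Apply a JSON merge patch (RFC 7386): dicts merge recursively,
    a None value deletes a key, and anything else replaces outright."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

ns = {"metadata": {"name": "nspatchtest", "labels": {}}}
patched = merge_patch(ns, {"metadata": {"labels": {"testLabel": "testValue"}}})
# patched["metadata"]["labels"] == {"testLabel": "testValue"},
# while metadata.name is untouched.
```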
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":158,"skipped":2595,"failed":0} ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:23:23.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-400cdf1a-9f1f-4937-a207-194082a37600 in namespace container-probe-154 Apr 2 00:23:27.716: INFO: Started pod test-webserver-400cdf1a-9f1f-4937-a207-194082a37600 in namespace container-probe-154 STEP: checking the pod's current state and verifying that restartCount is present Apr 2 00:23:27.719: INFO: Initial restart count of pod test-webserver-400cdf1a-9f1f-4937-a207-194082a37600 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:27:28.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-154" for this suite. 
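The liveness test above passes because `restartCount` stays at 0 across roughly four minutes of `/healthz` probing. The kubelet's restart decision reduces to a consecutive-failure counter against `failureThreshold`; the simplified model below assumes a default-like threshold of 3 and ignores timing, since neither value appears in this log.

```python
def restarts_after(probe_results, failure_threshold=3):
    """Count container restarts for a sequence of liveness probe outcomes
    (True = healthy). A restart fires after `failure_threshold` consecutive
    failures, after which the failure counter resets."""
    restarts = consecutive_failures = 0
    for healthy in probe_results:
        if healthy:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= failure_threshold:
                restarts += 1
                consecutive_failures = 0
    return restarts

# A /healthz endpoint that always answers 200: no restarts, as in the log.
assert restarts_after([True] * 24) == 0
# Three consecutive failures would trigger exactly one restart.
assert restarts_after([True, False, False, False, True]) == 1
```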
• [SLOW TEST:244.711 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2595,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:27:28.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-c6fc013f-5067-42ca-b81f-a1010a160944 STEP: Creating a pod to test consume configMaps Apr 2 00:27:28.707: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70e54023-6aa3-4c18-8d21-5878df0752ea" in namespace "projected-6616" to be "Succeeded or Failed" Apr 2 00:27:28.717: INFO: Pod "pod-projected-configmaps-70e54023-6aa3-4c18-8d21-5878df0752ea": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.595118ms Apr 2 00:27:30.721: INFO: Pod "pod-projected-configmaps-70e54023-6aa3-4c18-8d21-5878df0752ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013795022s Apr 2 00:27:32.725: INFO: Pod "pod-projected-configmaps-70e54023-6aa3-4c18-8d21-5878df0752ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017837351s STEP: Saw pod success Apr 2 00:27:32.725: INFO: Pod "pod-projected-configmaps-70e54023-6aa3-4c18-8d21-5878df0752ea" satisfied condition "Succeeded or Failed" Apr 2 00:27:32.728: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-70e54023-6aa3-4c18-8d21-5878df0752ea container projected-configmap-volume-test: STEP: delete the pod Apr 2 00:27:32.771: INFO: Waiting for pod pod-projected-configmaps-70e54023-6aa3-4c18-8d21-5878df0752ea to disappear Apr 2 00:27:32.776: INFO: Pod pod-projected-configmaps-70e54023-6aa3-4c18-8d21-5878df0752ea no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:27:32.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6616" for this suite. 
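"Consumable from pods in volume with mappings" above refers to a projected configMap whose `items` remap data keys onto file paths inside the volume. The projection itself is a key-to-path relabeling; in the sketch below, the key and path names are invented for illustration rather than taken from this test's spec.

```python
def project_items(data, items):
    """Map configMap `data` into volume files, honoring item mappings.

    `items` is a list of {"key": ..., "path": ...} dicts, in the shape of a
    ConfigMapVolumeSource; keys not listed in `items` are not projected.
    """
    files = {}
    for item in items:
        if item["key"] not in data:
            raise KeyError(f"configMap has no key {item['key']!r}")
        files[item["path"]] = data[item["key"]]
    return files

data = {"data-1": "value-1", "data-2": "value-2"}
files = project_items(data, [{"key": "data-1", "path": "path/to/data-1"}])
# files == {"path/to/data-1": "value-1"}; data-2 is left out.
```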
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2602,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:27:32.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-872/secret-test-0ec59775-4441-4fa9-8ff5-4ba9b2a92dd1 STEP: Creating a pod to test consume secrets Apr 2 00:27:32.859: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b912ab4-498e-41d8-92d1-eb09ee62335c" in namespace "secrets-872" to be "Succeeded or Failed" Apr 2 00:27:32.866: INFO: Pod "pod-configmaps-9b912ab4-498e-41d8-92d1-eb09ee62335c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.243887ms Apr 2 00:27:34.889: INFO: Pod "pod-configmaps-9b912ab4-498e-41d8-92d1-eb09ee62335c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030062504s Apr 2 00:27:36.893: INFO: Pod "pod-configmaps-9b912ab4-498e-41d8-92d1-eb09ee62335c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033722238s STEP: Saw pod success Apr 2 00:27:36.893: INFO: Pod "pod-configmaps-9b912ab4-498e-41d8-92d1-eb09ee62335c" satisfied condition "Succeeded or Failed" Apr 2 00:27:36.896: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-9b912ab4-498e-41d8-92d1-eb09ee62335c container env-test: STEP: delete the pod Apr 2 00:27:36.940: INFO: Waiting for pod pod-configmaps-9b912ab4-498e-41d8-92d1-eb09ee62335c to disappear Apr 2 00:27:36.962: INFO: Pod pod-configmaps-9b912ab4-498e-41d8-92d1-eb09ee62335c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:27:36.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-872" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2619,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:27:36.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 2 00:27:37.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5738' Apr 2 00:27:39.896: INFO: stderr: "" Apr 2 00:27:39.896: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 2 00:27:39.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5738' Apr 2 00:27:40.014: INFO: stderr: "" Apr 2 00:27:40.014: INFO: stdout: "update-demo-nautilus-srtc9 update-demo-nautilus-vcnrj " Apr 2 00:27:40.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srtc9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5738' Apr 2 00:27:40.107: INFO: stderr: "" Apr 2 00:27:40.107: INFO: stdout: "" Apr 2 00:27:40.107: INFO: update-demo-nautilus-srtc9 is created but not running Apr 2 00:27:45.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5738' Apr 2 00:27:45.195: INFO: stderr: "" Apr 2 00:27:45.195: INFO: stdout: "update-demo-nautilus-srtc9 update-demo-nautilus-vcnrj " Apr 2 00:27:45.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srtc9 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5738' Apr 2 00:27:45.283: INFO: stderr: "" Apr 2 00:27:45.283: INFO: stdout: "true" Apr 2 00:27:45.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srtc9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5738' Apr 2 00:27:45.378: INFO: stderr: "" Apr 2 00:27:45.378: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 00:27:45.378: INFO: validating pod update-demo-nautilus-srtc9 Apr 2 00:27:45.383: INFO: got data: { "image": "nautilus.jpg" } Apr 2 00:27:45.383: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 00:27:45.383: INFO: update-demo-nautilus-srtc9 is verified up and running Apr 2 00:27:45.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcnrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5738' Apr 2 00:27:45.471: INFO: stderr: "" Apr 2 00:27:45.471: INFO: stdout: "true" Apr 2 00:27:45.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcnrj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5738' Apr 2 00:27:45.551: INFO: stderr: "" Apr 2 00:27:45.551: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 2 00:27:45.551: INFO: validating pod update-demo-nautilus-vcnrj Apr 2 00:27:45.555: INFO: got data: { "image": "nautilus.jpg" } Apr 2 00:27:45.555: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 2 00:27:45.555: INFO: update-demo-nautilus-vcnrj is verified up and running STEP: using delete to clean up resources Apr 2 00:27:45.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5738' Apr 2 00:27:45.656: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 2 00:27:45.656: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 2 00:27:45.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5738' Apr 2 00:27:45.744: INFO: stderr: "No resources found in kubectl-5738 namespace.\n" Apr 2 00:27:45.744: INFO: stdout: "" Apr 2 00:27:45.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5738 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 2 00:27:45.833: INFO: stderr: "" Apr 2 00:27:45.833: INFO: stdout: "update-demo-nautilus-srtc9\nupdate-demo-nautilus-vcnrj\n" Apr 2 00:27:46.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l 
name=update-demo --no-headers --namespace=kubectl-5738' Apr 2 00:27:46.431: INFO: stderr: "No resources found in kubectl-5738 namespace.\n" Apr 2 00:27:46.431: INFO: stdout: "" Apr 2 00:27:46.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5738 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 2 00:27:46.602: INFO: stderr: "" Apr 2 00:27:46.602: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:27:46.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5738" for this suite. • [SLOW TEST:9.635 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":162,"skipped":2628,"failed":0} [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:27:46.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace 
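The Update Demo test above repeatedly runs `kubectl get pods -o template --template={{range .items}}{{.metadata.name}} {{end}}` to flatten the pod list into space-separated names. The equivalent extraction over the JSON form of that list can be sketched as follows; the pod names reuse the ones logged in this run.

```python
def pod_names(pod_list):
    """Mimic the go-template `{{range .items}}{{.metadata.name}} {{end}}`:
    each name followed by a space, including a trailing space."""
    return "".join(p["metadata"]["name"] + " " for p in pod_list["items"])

pods = {"items": [
    {"metadata": {"name": "update-demo-nautilus-srtc9"}},
    {"metadata": {"name": "update-demo-nautilus-vcnrj"}},
]}
# Matches the stdout logged above:
assert pod_names(pods) == "update-demo-nautilus-srtc9 update-demo-nautilus-vcnrj "
```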
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:27:46.752: INFO: Creating deployment "test-recreate-deployment" Apr 2 00:27:46.784: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 2 00:27:46.796: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 2 00:27:48.804: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 2 00:27:48.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384066, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384066, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384066, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384066, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 00:27:50.810: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 2 00:27:50.815: INFO: Updating deployment test-recreate-deployment Apr 2 00:27:50.815: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 2 00:27:51.335: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4666 /apis/apps/v1/namespaces/deployment-4666/deployments/test-recreate-deployment 5ec927a8-2f14-45a0-b1d8-d28e86ffbdbe 4672894 2 2020-04-02 00:27:46 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000532288 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-02 00:27:51 +0000 UTC,LastTransitionTime:2020-04-02 00:27:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet 
"test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-02 00:27:51 +0000 UTC,LastTransitionTime:2020-04-02 00:27:46 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 2 00:27:51.340: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4666 /apis/apps/v1/namespaces/deployment-4666/replicasets/test-recreate-deployment-5f94c574ff b3a6d5e6-4ea8-45ad-8a22-3ce487cdaa4c 4672891 1 2020-04-02 00:27:50 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 5ec927a8-2f14-45a0-b1d8-d28e86ffbdbe 0xc002a1fc17 0xc002a1fc18}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a1fcc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 2 
00:27:51.340: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 2 00:27:51.340: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-4666 /apis/apps/v1/namespaces/deployment-4666/replicasets/test-recreate-deployment-846c7dd955 509857f4-590b-4490-bd1f-32c68ba12042 4672883 2 2020-04-02 00:27:46 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 5ec927a8-2f14-45a0-b1d8-d28e86ffbdbe 0xc002a1fd97 0xc002a1fd98}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a1fec8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 2 00:27:51.347: INFO: Pod "test-recreate-deployment-5f94c574ff-zxbs4" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-zxbs4 test-recreate-deployment-5f94c574ff- 
deployment-4666 /api/v1/namespaces/deployment-4666/pods/test-recreate-deployment-5f94c574ff-zxbs4 aea150e8-6ab7-4961-8ce8-2de9449b2209 4672895 0 2020-04-02 00:27:50 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff b3a6d5e6-4ea8-45ad-8a22-3ce487cdaa4c 0xc0029a6687 0xc0029a6688}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-85zb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-85zb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-85zb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},Se
rviceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:27:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:27:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:27:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:27:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-02 00:27:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:27:51.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4666" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":163,"skipped":2628,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:27:51.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 2 00:27:51.492: INFO: observed the pod list STEP: verifying the pod is in kubernetes 
STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 2 00:28:00.773: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:28:00.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2620" for this suite. • [SLOW TEST:9.430 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2649,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:28:00.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: 
Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 2 00:28:01.316: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 2 00:28:03.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384081, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384081, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384081, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384081, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 00:28:06.396: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:28:06.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:28:07.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7736" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.945 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":165,"skipped":2682,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:28:07.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-mg4c STEP: Creating a pod to test atomic-volume-subpath Apr 2 00:28:07.840: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mg4c" in namespace "subpath-5146" to be "Succeeded or Failed" Apr 2 00:28:07.844: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035078ms Apr 2 00:28:09.866: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026251042s Apr 2 00:28:11.871: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Running", Reason="", readiness=true. Elapsed: 4.030281825s Apr 2 00:28:13.884: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Running", Reason="", readiness=true. Elapsed: 6.044222411s Apr 2 00:28:15.888: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Running", Reason="", readiness=true. Elapsed: 8.047944265s Apr 2 00:28:17.897: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Running", Reason="", readiness=true. Elapsed: 10.056495395s Apr 2 00:28:19.900: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Running", Reason="", readiness=true. Elapsed: 12.059896269s Apr 2 00:28:21.904: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Running", Reason="", readiness=true. Elapsed: 14.063476114s Apr 2 00:28:23.908: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Running", Reason="", readiness=true. Elapsed: 16.067720813s Apr 2 00:28:25.912: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Running", Reason="", readiness=true. Elapsed: 18.072010328s Apr 2 00:28:27.916: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Running", Reason="", readiness=true. Elapsed: 20.076049793s Apr 2 00:28:29.921: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Running", Reason="", readiness=true. Elapsed: 22.080304814s Apr 2 00:28:31.925: INFO: Pod "pod-subpath-test-secret-mg4c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.084557011s STEP: Saw pod success Apr 2 00:28:31.925: INFO: Pod "pod-subpath-test-secret-mg4c" satisfied condition "Succeeded or Failed" Apr 2 00:28:31.927: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-mg4c container test-container-subpath-secret-mg4c: STEP: delete the pod Apr 2 00:28:31.960: INFO: Waiting for pod pod-subpath-test-secret-mg4c to disappear Apr 2 00:28:31.971: INFO: Pod pod-subpath-test-secret-mg4c no longer exists STEP: Deleting pod pod-subpath-test-secret-mg4c Apr 2 00:28:31.971: INFO: Deleting pod "pod-subpath-test-secret-mg4c" in namespace "subpath-5146" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:28:31.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5146" for this suite. • [SLOW TEST:24.251 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":166,"skipped":2736,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 
00:28:31.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-f854d5cd-ff0b-4281-8f01-eaa8f62a0390 in namespace container-probe-878 Apr 2 00:28:36.082: INFO: Started pod liveness-f854d5cd-ff0b-4281-8f01-eaa8f62a0390 in namespace container-probe-878 STEP: checking the pod's current state and verifying that restartCount is present Apr 2 00:28:36.085: INFO: Initial restart count of pod liveness-f854d5cd-ff0b-4281-8f01-eaa8f62a0390 is 0 Apr 2 00:28:54.123: INFO: Restart count of pod container-probe-878/liveness-f854d5cd-ff0b-4281-8f01-eaa8f62a0390 is now 1 (18.038197274s elapsed) Apr 2 00:29:14.168: INFO: Restart count of pod container-probe-878/liveness-f854d5cd-ff0b-4281-8f01-eaa8f62a0390 is now 2 (38.083370908s elapsed) Apr 2 00:29:34.208: INFO: Restart count of pod container-probe-878/liveness-f854d5cd-ff0b-4281-8f01-eaa8f62a0390 is now 3 (58.123558664s elapsed) Apr 2 00:29:54.263: INFO: Restart count of pod container-probe-878/liveness-f854d5cd-ff0b-4281-8f01-eaa8f62a0390 is now 4 (1m18.178104186s elapsed) Apr 2 00:30:58.395: INFO: Restart count of pod container-probe-878/liveness-f854d5cd-ff0b-4281-8f01-eaa8f62a0390 is now 5 (2m22.310395245s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:30:58.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-878" for this suite. 
• [SLOW TEST:146.450 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2772,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:30:58.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 00:30:59.249: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 00:31:01.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384259, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384259, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384259, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384259, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 00:31:04.293: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:31:04.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4874-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:31:05.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-248" for this suite. STEP: Destroying namespace "webhook-248-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.134 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":168,"skipped":2796,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:31:05.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0402 00:31:17.291780 7 metrics_grabber.go:84] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 2 00:31:17.291: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:31:17.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5086" for this suite. 
• [SLOW TEST:11.731 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":169,"skipped":2820,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:31:17.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 2 00:31:17.419: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5934 /api/v1/namespaces/watch-5934/configmaps/e2e-watch-test-resource-version 
7ae09e17-524c-421c-9840-da08f409eaae 4673912 0 2020-04-02 00:31:17 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:31:17.419: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5934 /api/v1/namespaces/watch-5934/configmaps/e2e-watch-test-resource-version 7ae09e17-524c-421c-9840-da08f409eaae 4673913 0 2020-04-02 00:31:17 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:31:17.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5934" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":170,"skipped":2840,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:31:17.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6869 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6869 STEP: creating replication controller externalsvc in namespace services-6869 I0402 00:31:17.573627 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6869, replica count: 2 I0402 00:31:20.624192 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 00:31:23.624414 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 2 00:31:23.744: INFO: Creating new exec pod Apr 2 00:31:27.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6869 execpodq749f -- /bin/sh -x -c nslookup clusterip-service' Apr 2 00:31:27.981: INFO: stderr: "I0402 00:31:27.902351 2363 log.go:172] (0xc00003a840) (0xc0003cebe0) Create stream\nI0402 00:31:27.902415 2363 log.go:172] (0xc00003a840) (0xc0003cebe0) Stream added, broadcasting: 1\nI0402 00:31:27.905299 2363 log.go:172] (0xc00003a840) Reply frame received for 1\nI0402 00:31:27.905349 2363 log.go:172] (0xc00003a840) (0xc0007b7360) Create stream\nI0402 00:31:27.905366 2363 log.go:172] (0xc00003a840) (0xc0007b7360) Stream added, broadcasting: 3\nI0402 00:31:27.906463 2363 log.go:172] (0xc00003a840) Reply frame received for 3\nI0402 00:31:27.906516 2363 log.go:172] (0xc00003a840) (0xc0009e4000) Create stream\nI0402 00:31:27.906532 2363 log.go:172] (0xc00003a840) (0xc0009e4000) Stream added, broadcasting: 5\nI0402 
00:31:27.907789 2363 log.go:172] (0xc00003a840) Reply frame received for 5\nI0402 00:31:27.964470 2363 log.go:172] (0xc00003a840) Data frame received for 5\nI0402 00:31:27.964496 2363 log.go:172] (0xc0009e4000) (5) Data frame handling\nI0402 00:31:27.964512 2363 log.go:172] (0xc0009e4000) (5) Data frame sent\n+ nslookup clusterip-service\nI0402 00:31:27.972964 2363 log.go:172] (0xc00003a840) Data frame received for 3\nI0402 00:31:27.972991 2363 log.go:172] (0xc0007b7360) (3) Data frame handling\nI0402 00:31:27.973014 2363 log.go:172] (0xc0007b7360) (3) Data frame sent\nI0402 00:31:27.974256 2363 log.go:172] (0xc00003a840) Data frame received for 3\nI0402 00:31:27.974296 2363 log.go:172] (0xc0007b7360) (3) Data frame handling\nI0402 00:31:27.974321 2363 log.go:172] (0xc0007b7360) (3) Data frame sent\nI0402 00:31:27.974675 2363 log.go:172] (0xc00003a840) Data frame received for 3\nI0402 00:31:27.974695 2363 log.go:172] (0xc0007b7360) (3) Data frame handling\nI0402 00:31:27.974716 2363 log.go:172] (0xc00003a840) Data frame received for 5\nI0402 00:31:27.974726 2363 log.go:172] (0xc0009e4000) (5) Data frame handling\nI0402 00:31:27.976365 2363 log.go:172] (0xc00003a840) Data frame received for 1\nI0402 00:31:27.976395 2363 log.go:172] (0xc0003cebe0) (1) Data frame handling\nI0402 00:31:27.976421 2363 log.go:172] (0xc0003cebe0) (1) Data frame sent\nI0402 00:31:27.976437 2363 log.go:172] (0xc00003a840) (0xc0003cebe0) Stream removed, broadcasting: 1\nI0402 00:31:27.976463 2363 log.go:172] (0xc00003a840) Go away received\nI0402 00:31:27.976846 2363 log.go:172] (0xc00003a840) (0xc0003cebe0) Stream removed, broadcasting: 1\nI0402 00:31:27.976863 2363 log.go:172] (0xc00003a840) (0xc0007b7360) Stream removed, broadcasting: 3\nI0402 00:31:27.976871 2363 log.go:172] (0xc00003a840) (0xc0009e4000) Stream removed, broadcasting: 5\n" Apr 2 00:31:27.981: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6869.svc.cluster.local\tcanonical name 
= externalsvc.services-6869.svc.cluster.local.\nName:\texternalsvc.services-6869.svc.cluster.local\nAddress: 10.96.115.100\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6869, will wait for the garbage collector to delete the pods Apr 2 00:31:28.041: INFO: Deleting ReplicationController externalsvc took: 6.028568ms Apr 2 00:31:28.141: INFO: Terminating ReplicationController externalsvc pods took: 100.377592ms Apr 2 00:31:43.076: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:31:43.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6869" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:25.683 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":171,"skipped":2922,"failed":0} [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:31:43.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to 
be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 2 00:31:43.185: INFO: Waiting up to 5m0s for pod "pod-0e6eda43-863a-4af5-955f-b825947f905d" in namespace "emptydir-4403" to be "Succeeded or Failed" Apr 2 00:31:43.203: INFO: Pod "pod-0e6eda43-863a-4af5-955f-b825947f905d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.645849ms Apr 2 00:31:45.207: INFO: Pod "pod-0e6eda43-863a-4af5-955f-b825947f905d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022305598s Apr 2 00:31:47.212: INFO: Pod "pod-0e6eda43-863a-4af5-955f-b825947f905d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026657503s STEP: Saw pod success Apr 2 00:31:47.212: INFO: Pod "pod-0e6eda43-863a-4af5-955f-b825947f905d" satisfied condition "Succeeded or Failed" Apr 2 00:31:47.215: INFO: Trying to get logs from node latest-worker pod pod-0e6eda43-863a-4af5-955f-b825947f905d container test-container: STEP: delete the pod Apr 2 00:31:47.265: INFO: Waiting for pod pod-0e6eda43-863a-4af5-955f-b825947f905d to disappear Apr 2 00:31:47.274: INFO: Pod pod-0e6eda43-863a-4af5-955f-b825947f905d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:31:47.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4403" for this suite. 
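The emptyDir `(root,0644,tmpfs)` test above reduces to writing a file into the volume with mode 0644 and asserting the mode was preserved. The core assertion can be sketched outside a cluster with plain filesystem calls — an ordinary temporary directory stands in for the tmpfs-backed emptyDir mount, and `write_with_mode` is an illustrative helper, not part of the e2e framework:

```python
import os
import stat
import tempfile

def write_with_mode(directory: str, name: str, data: bytes, mode: int = 0o644) -> str:
    """Create a file and pin its permission bits, mirroring the 0644 check."""
    path = os.path.join(directory, name)
    with open(path, "wb") as f:
        f.write(data)
    # open() honors the process umask, so chmod afterwards to force the exact mode.
    os.chmod(path, mode)
    return path

with tempfile.TemporaryDirectory() as d:  # stands in for the emptyDir mount
    p = write_with_mode(d, "mount-test", b"mount-tester content")
    actual = stat.S_IMODE(os.stat(p).st_mode)
    assert actual == 0o644, oct(actual)
```

In the real test the pod's container performs the equivalent check from inside the mount and the pod succeeding ("Succeeded or Failed" above) is the pass signal.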
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2922,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:31:47.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-1866 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 2 00:31:47.320: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 2 00:31:47.391: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 2 00:31:49.463: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 2 00:31:51.395: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:31:53.403: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:31:55.397: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:31:57.395: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:31:59.395: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 2 00:32:01.395: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 
2 00:32:01.402: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 2 00:32:03.406: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 2 00:32:05.406: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 2 00:32:07.406: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 2 00:32:11.439: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.36:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1866 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:32:11.439: INFO: >>> kubeConfig: /root/.kube/config I0402 00:32:11.475897 7 log.go:172] (0xc002c3cf20) (0xc0002c4d20) Create stream I0402 00:32:11.475935 7 log.go:172] (0xc002c3cf20) (0xc0002c4d20) Stream added, broadcasting: 1 I0402 00:32:11.477983 7 log.go:172] (0xc002c3cf20) Reply frame received for 1 I0402 00:32:11.478024 7 log.go:172] (0xc002c3cf20) (0xc001708aa0) Create stream I0402 00:32:11.478036 7 log.go:172] (0xc002c3cf20) (0xc001708aa0) Stream added, broadcasting: 3 I0402 00:32:11.478877 7 log.go:172] (0xc002c3cf20) Reply frame received for 3 I0402 00:32:11.478904 7 log.go:172] (0xc002c3cf20) (0xc001708b40) Create stream I0402 00:32:11.478915 7 log.go:172] (0xc002c3cf20) (0xc001708b40) Stream added, broadcasting: 5 I0402 00:32:11.479660 7 log.go:172] (0xc002c3cf20) Reply frame received for 5 I0402 00:32:11.560080 7 log.go:172] (0xc002c3cf20) Data frame received for 5 I0402 00:32:11.560119 7 log.go:172] (0xc001708b40) (5) Data frame handling I0402 00:32:11.560149 7 log.go:172] (0xc002c3cf20) Data frame received for 3 I0402 00:32:11.560178 7 log.go:172] (0xc001708aa0) (3) Data frame handling I0402 00:32:11.560200 7 log.go:172] (0xc001708aa0) (3) Data frame sent I0402 00:32:11.560214 7 log.go:172] (0xc002c3cf20) Data frame received for 3 I0402 00:32:11.560263 7 log.go:172] (0xc001708aa0) 
(3) Data frame handling I0402 00:32:11.562109 7 log.go:172] (0xc002c3cf20) Data frame received for 1 I0402 00:32:11.562141 7 log.go:172] (0xc0002c4d20) (1) Data frame handling I0402 00:32:11.562164 7 log.go:172] (0xc0002c4d20) (1) Data frame sent I0402 00:32:11.562199 7 log.go:172] (0xc002c3cf20) (0xc0002c4d20) Stream removed, broadcasting: 1 I0402 00:32:11.562228 7 log.go:172] (0xc002c3cf20) Go away received I0402 00:32:11.562399 7 log.go:172] (0xc002c3cf20) (0xc0002c4d20) Stream removed, broadcasting: 1 I0402 00:32:11.562435 7 log.go:172] (0xc002c3cf20) (0xc001708aa0) Stream removed, broadcasting: 3 I0402 00:32:11.562460 7 log.go:172] (0xc002c3cf20) (0xc001708b40) Stream removed, broadcasting: 5 Apr 2 00:32:11.562: INFO: Found all expected endpoints: [netserver-0] Apr 2 00:32:11.566: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.142:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1866 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:32:11.566: INFO: >>> kubeConfig: /root/.kube/config I0402 00:32:11.599314 7 log.go:172] (0xc00288a8f0) (0xc001312d20) Create stream I0402 00:32:11.599357 7 log.go:172] (0xc00288a8f0) (0xc001312d20) Stream added, broadcasting: 1 I0402 00:32:11.602354 7 log.go:172] (0xc00288a8f0) Reply frame received for 1 I0402 00:32:11.602393 7 log.go:172] (0xc00288a8f0) (0xc001197360) Create stream I0402 00:32:11.602409 7 log.go:172] (0xc00288a8f0) (0xc001197360) Stream added, broadcasting: 3 I0402 00:32:11.603592 7 log.go:172] (0xc00288a8f0) Reply frame received for 3 I0402 00:32:11.603640 7 log.go:172] (0xc00288a8f0) (0xc0002c54a0) Create stream I0402 00:32:11.603665 7 log.go:172] (0xc00288a8f0) (0xc0002c54a0) Stream added, broadcasting: 5 I0402 00:32:11.605032 7 log.go:172] (0xc00288a8f0) Reply frame received for 5 I0402 00:32:11.672575 7 log.go:172] (0xc00288a8f0) Data frame received for 3 
I0402 00:32:11.672612 7 log.go:172] (0xc001197360) (3) Data frame handling I0402 00:32:11.672627 7 log.go:172] (0xc001197360) (3) Data frame sent I0402 00:32:11.672632 7 log.go:172] (0xc00288a8f0) Data frame received for 3 I0402 00:32:11.672656 7 log.go:172] (0xc001197360) (3) Data frame handling I0402 00:32:11.672796 7 log.go:172] (0xc00288a8f0) Data frame received for 5 I0402 00:32:11.672825 7 log.go:172] (0xc0002c54a0) (5) Data frame handling I0402 00:32:11.674534 7 log.go:172] (0xc00288a8f0) Data frame received for 1 I0402 00:32:11.674560 7 log.go:172] (0xc001312d20) (1) Data frame handling I0402 00:32:11.674596 7 log.go:172] (0xc001312d20) (1) Data frame sent I0402 00:32:11.674622 7 log.go:172] (0xc00288a8f0) (0xc001312d20) Stream removed, broadcasting: 1 I0402 00:32:11.674669 7 log.go:172] (0xc00288a8f0) Go away received I0402 00:32:11.674733 7 log.go:172] (0xc00288a8f0) (0xc001312d20) Stream removed, broadcasting: 1 I0402 00:32:11.674755 7 log.go:172] (0xc00288a8f0) (0xc001197360) Stream removed, broadcasting: 3 I0402 00:32:11.674776 7 log.go:172] (0xc00288a8f0) (0xc0002c54a0) Stream removed, broadcasting: 5 Apr 2 00:32:11.674: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:32:11.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1866" for this suite. 
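Each connectivity probe in the node-pod networking test above is an HTTP GET of `/hostName` against a netserver pod (the `curl … http://<podIP>:8080/hostName` commands in the log) followed by comparing the reply to the expected pod name. A self-contained sketch of that request/response loop, with a local `http.server` standing in for the netserver container — the path mirrors the log, everything else (names, port selection) is illustrative:

```python
import http.server
import threading
import urllib.request

HOSTNAME = "netserver-0"  # illustrative pod name

class HostNameHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hostName":
            body = HOSTNAME.encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the sketch quiet
        pass

# Bind to an ephemeral port and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), HostNameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/hostName" % server.server_port
reply = urllib.request.urlopen(url, timeout=15).read().decode()
server.shutdown()
assert reply == HOSTNAME  # "Found all expected endpoints" amounts to this per pod
```

The e2e harness additionally runs the probe from a host-network pod (`host-test-container-pod`) so that node-to-pod reachability, not just pod-to-pod, is exercised.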
• [SLOW TEST:24.401 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2932,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:32:11.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:32:11.747: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6744 I0402 00:32:11.771366 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6744, replica count: 1 I0402 00:32:12.821766 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 00:32:13.821965 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 
1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 00:32:14.822207 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 2 00:32:14.960: INFO: Created: latency-svc-tgf7t Apr 2 00:32:14.971: INFO: Got endpoints: latency-svc-tgf7t [48.76003ms] Apr 2 00:32:14.990: INFO: Created: latency-svc-g5vbb Apr 2 00:32:14.999: INFO: Got endpoints: latency-svc-g5vbb [28.645178ms] Apr 2 00:32:15.026: INFO: Created: latency-svc-prw2g Apr 2 00:32:15.047: INFO: Got endpoints: latency-svc-prw2g [75.715699ms] Apr 2 00:32:15.110: INFO: Created: latency-svc-cfkjm Apr 2 00:32:15.140: INFO: Got endpoints: latency-svc-cfkjm [168.856424ms] Apr 2 00:32:15.141: INFO: Created: latency-svc-r5p29 Apr 2 00:32:15.166: INFO: Got endpoints: latency-svc-r5p29 [195.0819ms] Apr 2 00:32:15.188: INFO: Created: latency-svc-pgh9w Apr 2 00:32:15.203: INFO: Got endpoints: latency-svc-pgh9w [232.328454ms] Apr 2 00:32:15.247: INFO: Created: latency-svc-gnsnw Apr 2 00:32:15.263: INFO: Got endpoints: latency-svc-gnsnw [291.213825ms] Apr 2 00:32:15.263: INFO: Created: latency-svc-wqv8v Apr 2 00:32:15.275: INFO: Got endpoints: latency-svc-wqv8v [303.742593ms] Apr 2 00:32:15.293: INFO: Created: latency-svc-5vsg8 Apr 2 00:32:15.305: INFO: Got endpoints: latency-svc-5vsg8 [333.742768ms] Apr 2 00:32:15.326: INFO: Created: latency-svc-5lw7s Apr 2 00:32:15.403: INFO: Got endpoints: latency-svc-5lw7s [431.457623ms] Apr 2 00:32:15.406: INFO: Created: latency-svc-t6hrw Apr 2 00:32:15.426: INFO: Got endpoints: latency-svc-t6hrw [454.808876ms] Apr 2 00:32:15.428: INFO: Created: latency-svc-8dk69 Apr 2 00:32:15.437: INFO: Got endpoints: latency-svc-8dk69 [465.583424ms] Apr 2 00:32:15.541: INFO: Created: latency-svc-5xz2h Apr 2 00:32:15.572: INFO: Got endpoints: latency-svc-5xz2h [601.060774ms] Apr 2 00:32:15.572: INFO: Created: latency-svc-mx4xw Apr 2 00:32:15.593: INFO: Got endpoints: 
latency-svc-mx4xw [621.152937ms] Apr 2 00:32:15.627: INFO: Created: latency-svc-w4hv2 Apr 2 00:32:15.678: INFO: Got endpoints: latency-svc-w4hv2 [707.191234ms] Apr 2 00:32:15.679: INFO: Created: latency-svc-slfw4 Apr 2 00:32:15.682: INFO: Got endpoints: latency-svc-slfw4 [710.979107ms] Apr 2 00:32:15.701: INFO: Created: latency-svc-ccn2z Apr 2 00:32:15.720: INFO: Got endpoints: latency-svc-ccn2z [719.965229ms] Apr 2 00:32:15.746: INFO: Created: latency-svc-492n5 Apr 2 00:32:15.770: INFO: Got endpoints: latency-svc-492n5 [723.891343ms] Apr 2 00:32:15.818: INFO: Created: latency-svc-9w25q Apr 2 00:32:15.832: INFO: Got endpoints: latency-svc-9w25q [691.725944ms] Apr 2 00:32:15.856: INFO: Created: latency-svc-fp9lz Apr 2 00:32:15.874: INFO: Got endpoints: latency-svc-fp9lz [707.669387ms] Apr 2 00:32:15.899: INFO: Created: latency-svc-jscq8 Apr 2 00:32:15.930: INFO: Got endpoints: latency-svc-jscq8 [726.772681ms] Apr 2 00:32:15.944: INFO: Created: latency-svc-bm8s9 Apr 2 00:32:15.958: INFO: Got endpoints: latency-svc-bm8s9 [695.131894ms] Apr 2 00:32:15.976: INFO: Created: latency-svc-ltphn Apr 2 00:32:15.988: INFO: Got endpoints: latency-svc-ltphn [712.970165ms] Apr 2 00:32:16.013: INFO: Created: latency-svc-6slxd Apr 2 00:32:16.069: INFO: Got endpoints: latency-svc-6slxd [763.979531ms] Apr 2 00:32:16.069: INFO: Created: latency-svc-57bhd Apr 2 00:32:16.095: INFO: Got endpoints: latency-svc-57bhd [691.61652ms] Apr 2 00:32:16.095: INFO: Created: latency-svc-m9smh Apr 2 00:32:16.120: INFO: Got endpoints: latency-svc-m9smh [694.262149ms] Apr 2 00:32:16.148: INFO: Created: latency-svc-hsxcp Apr 2 00:32:16.162: INFO: Got endpoints: latency-svc-hsxcp [93.443071ms] Apr 2 00:32:16.202: INFO: Created: latency-svc-wvk7l Apr 2 00:32:16.216: INFO: Got endpoints: latency-svc-wvk7l [778.682388ms] Apr 2 00:32:16.247: INFO: Created: latency-svc-fq54k Apr 2 00:32:16.258: INFO: Got endpoints: latency-svc-fq54k [685.734533ms] Apr 2 00:32:16.319: INFO: Created: latency-svc-5tbhr Apr 2 
00:32:16.324: INFO: Got endpoints: latency-svc-5tbhr [730.965991ms] Apr 2 00:32:16.340: INFO: Created: latency-svc-9r9vf Apr 2 00:32:16.354: INFO: Got endpoints: latency-svc-9r9vf [675.62743ms] Apr 2 00:32:16.370: INFO: Created: latency-svc-w9frq Apr 2 00:32:16.384: INFO: Got endpoints: latency-svc-w9frq [701.246802ms] Apr 2 00:32:16.415: INFO: Created: latency-svc-rkb4h Apr 2 00:32:16.451: INFO: Got endpoints: latency-svc-rkb4h [731.433274ms] Apr 2 00:32:16.470: INFO: Created: latency-svc-4xxmn Apr 2 00:32:16.502: INFO: Got endpoints: latency-svc-4xxmn [731.444888ms] Apr 2 00:32:16.528: INFO: Created: latency-svc-cqlmw Apr 2 00:32:16.539: INFO: Got endpoints: latency-svc-cqlmw [706.741231ms] Apr 2 00:32:16.589: INFO: Created: latency-svc-q4m7c Apr 2 00:32:16.607: INFO: Got endpoints: latency-svc-q4m7c [732.335142ms] Apr 2 00:32:16.607: INFO: Created: latency-svc-mbvr4 Apr 2 00:32:16.624: INFO: Got endpoints: latency-svc-mbvr4 [693.555751ms] Apr 2 00:32:16.643: INFO: Created: latency-svc-mw8gq Apr 2 00:32:16.659: INFO: Got endpoints: latency-svc-mw8gq [700.917546ms] Apr 2 00:32:16.678: INFO: Created: latency-svc-hsqwp Apr 2 00:32:16.726: INFO: Got endpoints: latency-svc-hsqwp [738.678819ms] Apr 2 00:32:16.742: INFO: Created: latency-svc-6jtqq Apr 2 00:32:16.754: INFO: Got endpoints: latency-svc-6jtqq [659.714397ms] Apr 2 00:32:16.772: INFO: Created: latency-svc-dhzcw Apr 2 00:32:16.785: INFO: Got endpoints: latency-svc-dhzcw [664.568416ms] Apr 2 00:32:16.811: INFO: Created: latency-svc-6cx2d Apr 2 00:32:16.882: INFO: Got endpoints: latency-svc-6cx2d [720.061082ms] Apr 2 00:32:16.884: INFO: Created: latency-svc-dvpx2 Apr 2 00:32:16.904: INFO: Got endpoints: latency-svc-dvpx2 [688.162208ms] Apr 2 00:32:16.905: INFO: Created: latency-svc-wsxx8 Apr 2 00:32:16.928: INFO: Got endpoints: latency-svc-wsxx8 [670.019623ms] Apr 2 00:32:16.952: INFO: Created: latency-svc-gk8vv Apr 2 00:32:16.964: INFO: Got endpoints: latency-svc-gk8vv [640.42248ms] Apr 2 00:32:16.979: INFO: 
Created: latency-svc-f7hmn Apr 2 00:32:17.033: INFO: Got endpoints: latency-svc-f7hmn [678.567005ms] Apr 2 00:32:17.038: INFO: Created: latency-svc-h2mz6 Apr 2 00:32:17.049: INFO: Got endpoints: latency-svc-h2mz6 [664.714386ms] Apr 2 00:32:17.072: INFO: Created: latency-svc-vzfc5 Apr 2 00:32:17.084: INFO: Got endpoints: latency-svc-vzfc5 [632.735624ms] Apr 2 00:32:17.102: INFO: Created: latency-svc-qs2l2 Apr 2 00:32:17.114: INFO: Got endpoints: latency-svc-qs2l2 [612.052811ms] Apr 2 00:32:17.170: INFO: Created: latency-svc-wtmvj Apr 2 00:32:17.195: INFO: Got endpoints: latency-svc-wtmvj [655.721886ms] Apr 2 00:32:17.196: INFO: Created: latency-svc-8z8tx Apr 2 00:32:17.210: INFO: Got endpoints: latency-svc-8z8tx [602.996536ms] Apr 2 00:32:17.240: INFO: Created: latency-svc-tw8sz Apr 2 00:32:17.289: INFO: Got endpoints: latency-svc-tw8sz [665.582601ms] Apr 2 00:32:17.300: INFO: Created: latency-svc-np6qt Apr 2 00:32:17.312: INFO: Got endpoints: latency-svc-np6qt [652.842923ms] Apr 2 00:32:17.345: INFO: Created: latency-svc-cptpx Apr 2 00:32:17.360: INFO: Got endpoints: latency-svc-cptpx [633.368274ms] Apr 2 00:32:17.381: INFO: Created: latency-svc-xk8cj Apr 2 00:32:17.445: INFO: Got endpoints: latency-svc-xk8cj [690.419622ms] Apr 2 00:32:17.446: INFO: Created: latency-svc-7vg7m Apr 2 00:32:17.449: INFO: Got endpoints: latency-svc-7vg7m [664.478439ms] Apr 2 00:32:17.468: INFO: Created: latency-svc-mcpgz Apr 2 00:32:17.486: INFO: Got endpoints: latency-svc-mcpgz [603.269314ms] Apr 2 00:32:17.512: INFO: Created: latency-svc-t5wl4 Apr 2 00:32:17.534: INFO: Got endpoints: latency-svc-t5wl4 [629.535966ms] Apr 2 00:32:17.597: INFO: Created: latency-svc-mllb9 Apr 2 00:32:17.606: INFO: Got endpoints: latency-svc-mllb9 [677.339801ms] Apr 2 00:32:17.624: INFO: Created: latency-svc-vv84w Apr 2 00:32:17.642: INFO: Got endpoints: latency-svc-vv84w [677.610734ms] Apr 2 00:32:17.661: INFO: Created: latency-svc-s9ktj Apr 2 00:32:17.720: INFO: Got endpoints: latency-svc-s9ktj 
[687.617728ms] Apr 2 00:32:17.741: INFO: Created: latency-svc-fllhr Apr 2 00:32:17.759: INFO: Got endpoints: latency-svc-fllhr [709.974088ms] Apr 2 00:32:17.795: INFO: Created: latency-svc-vfpq8 Apr 2 00:32:17.815: INFO: Got endpoints: latency-svc-vfpq8 [731.408398ms] Apr 2 00:32:17.865: INFO: Created: latency-svc-fbvxs Apr 2 00:32:17.883: INFO: Created: latency-svc-dthx2 Apr 2 00:32:17.883: INFO: Got endpoints: latency-svc-fbvxs [768.524056ms] Apr 2 00:32:17.893: INFO: Got endpoints: latency-svc-dthx2 [698.41872ms] Apr 2 00:32:17.906: INFO: Created: latency-svc-4kc8s Apr 2 00:32:17.929: INFO: Got endpoints: latency-svc-4kc8s [719.678133ms] Apr 2 00:32:17.951: INFO: Created: latency-svc-w9bsn Apr 2 00:32:17.984: INFO: Got endpoints: latency-svc-w9bsn [694.246784ms] Apr 2 00:32:18.005: INFO: Created: latency-svc-7wzqd Apr 2 00:32:18.019: INFO: Got endpoints: latency-svc-7wzqd [707.351906ms] Apr 2 00:32:18.068: INFO: Created: latency-svc-ncjbz Apr 2 00:32:18.104: INFO: Got endpoints: latency-svc-ncjbz [744.425777ms] Apr 2 00:32:18.122: INFO: Created: latency-svc-tbzj4 Apr 2 00:32:18.139: INFO: Got endpoints: latency-svc-tbzj4 [693.706021ms] Apr 2 00:32:18.179: INFO: Created: latency-svc-nkw6j Apr 2 00:32:18.204: INFO: Got endpoints: latency-svc-nkw6j [754.120643ms] Apr 2 00:32:18.230: INFO: Created: latency-svc-295dv Apr 2 00:32:18.253: INFO: Got endpoints: latency-svc-295dv [767.052634ms] Apr 2 00:32:18.285: INFO: Created: latency-svc-74rjt Apr 2 00:32:18.361: INFO: Got endpoints: latency-svc-74rjt [827.228467ms] Apr 2 00:32:18.396: INFO: Created: latency-svc-pftrr Apr 2 00:32:18.415: INFO: Got endpoints: latency-svc-pftrr [809.068798ms] Apr 2 00:32:18.448: INFO: Created: latency-svc-6jtlg Apr 2 00:32:18.481: INFO: Got endpoints: latency-svc-6jtlg [838.401055ms] Apr 2 00:32:18.495: INFO: Created: latency-svc-4kbkm Apr 2 00:32:18.504: INFO: Got endpoints: latency-svc-4kbkm [783.723517ms] Apr 2 00:32:18.533: INFO: Created: latency-svc-7jwjx Apr 2 00:32:18.546: INFO: 
Got endpoints: latency-svc-7jwjx [787.313654ms] Apr 2 00:32:18.569: INFO: Created: latency-svc-txb7c Apr 2 00:32:18.600: INFO: Got endpoints: latency-svc-txb7c [785.196506ms] Apr 2 00:32:18.639: INFO: Created: latency-svc-4849t Apr 2 00:32:18.648: INFO: Got endpoints: latency-svc-4849t [765.148814ms] Apr 2 00:32:18.669: INFO: Created: latency-svc-g569g Apr 2 00:32:18.678: INFO: Got endpoints: latency-svc-g569g [784.763601ms] Apr 2 00:32:18.743: INFO: Created: latency-svc-8xjlb Apr 2 00:32:18.756: INFO: Got endpoints: latency-svc-8xjlb [826.888148ms] Apr 2 00:32:18.783: INFO: Created: latency-svc-zg88j Apr 2 00:32:18.802: INFO: Got endpoints: latency-svc-zg88j [818.15243ms] Apr 2 00:32:18.824: INFO: Created: latency-svc-qpgkr Apr 2 00:32:18.894: INFO: Got endpoints: latency-svc-qpgkr [874.982119ms] Apr 2 00:32:18.905: INFO: Created: latency-svc-52x68 Apr 2 00:32:18.918: INFO: Got endpoints: latency-svc-52x68 [813.518191ms] Apr 2 00:32:18.935: INFO: Created: latency-svc-mgm9l Apr 2 00:32:18.948: INFO: Got endpoints: latency-svc-mgm9l [809.282486ms] Apr 2 00:32:18.965: INFO: Created: latency-svc-558fh Apr 2 00:32:18.992: INFO: Got endpoints: latency-svc-558fh [788.477335ms] Apr 2 00:32:19.048: INFO: Created: latency-svc-wsh2v Apr 2 00:32:19.061: INFO: Got endpoints: latency-svc-wsh2v [808.331952ms] Apr 2 00:32:19.103: INFO: Created: latency-svc-7gt79 Apr 2 00:32:19.116: INFO: Got endpoints: latency-svc-7gt79 [754.780399ms] Apr 2 00:32:19.166: INFO: Created: latency-svc-nr8dl Apr 2 00:32:19.197: INFO: Got endpoints: latency-svc-nr8dl [782.251514ms] Apr 2 00:32:19.197: INFO: Created: latency-svc-wgmzg Apr 2 00:32:19.211: INFO: Got endpoints: latency-svc-wgmzg [730.11824ms] Apr 2 00:32:19.232: INFO: Created: latency-svc-s29jp Apr 2 00:32:19.241: INFO: Got endpoints: latency-svc-s29jp [736.615431ms] Apr 2 00:32:19.333: INFO: Created: latency-svc-cdc4h Apr 2 00:32:19.578: INFO: Got endpoints: latency-svc-cdc4h [1.031929665s] Apr 2 00:32:19.582: INFO: Created: 
latency-svc-p7xbw Apr 2 00:32:19.607: INFO: Got endpoints: latency-svc-p7xbw [1.006221288s] Apr 2 00:32:19.639: INFO: Created: latency-svc-24k6c Apr 2 00:32:19.665: INFO: Got endpoints: latency-svc-24k6c [1.017119817s] Apr 2 00:32:19.769: INFO: Created: latency-svc-vbhrl Apr 2 00:32:19.888: INFO: Got endpoints: latency-svc-vbhrl [1.209566321s] Apr 2 00:32:20.008: INFO: Created: latency-svc-mmnl2 Apr 2 00:32:20.014: INFO: Got endpoints: latency-svc-mmnl2 [1.257648594s] Apr 2 00:32:20.052: INFO: Created: latency-svc-6xvv4 Apr 2 00:32:20.104: INFO: Got endpoints: latency-svc-6xvv4 [1.302399682s] Apr 2 00:32:20.279: INFO: Created: latency-svc-r7rtt Apr 2 00:32:20.487: INFO: Got endpoints: latency-svc-r7rtt [1.593357867s] Apr 2 00:32:20.751: INFO: Created: latency-svc-rdxpr Apr 2 00:32:20.843: INFO: Got endpoints: latency-svc-rdxpr [1.924668039s] Apr 2 00:32:20.843: INFO: Created: latency-svc-654fx Apr 2 00:32:20.846: INFO: Got endpoints: latency-svc-654fx [1.898110772s] Apr 2 00:32:20.884: INFO: Created: latency-svc-ndftt Apr 2 00:32:20.912: INFO: Got endpoints: latency-svc-ndftt [1.92003003s] Apr 2 00:32:21.063: INFO: Created: latency-svc-fpgp8 Apr 2 00:32:21.200: INFO: Got endpoints: latency-svc-fpgp8 [2.138412577s] Apr 2 00:32:21.217: INFO: Created: latency-svc-cgbct Apr 2 00:32:21.229: INFO: Got endpoints: latency-svc-cgbct [2.113548109s] Apr 2 00:32:21.266: INFO: Created: latency-svc-hkm9c Apr 2 00:32:21.283: INFO: Got endpoints: latency-svc-hkm9c [2.086209509s] Apr 2 00:32:21.361: INFO: Created: latency-svc-prb29 Apr 2 00:32:21.422: INFO: Created: latency-svc-f7jx9 Apr 2 00:32:21.422: INFO: Got endpoints: latency-svc-prb29 [2.211633477s] Apr 2 00:32:21.506: INFO: Got endpoints: latency-svc-f7jx9 [2.264812282s] Apr 2 00:32:21.536: INFO: Created: latency-svc-5xchd Apr 2 00:32:21.559: INFO: Got endpoints: latency-svc-5xchd [1.981067128s] Apr 2 00:32:21.589: INFO: Created: latency-svc-cdb88 Apr 2 00:32:21.637: INFO: Got endpoints: latency-svc-cdb88 [2.030091036s] Apr 
2 00:32:21.654: INFO: Created: latency-svc-nwvv5 Apr 2 00:32:21.715: INFO: Got endpoints: latency-svc-nwvv5 [2.049859463s] Apr 2 00:32:21.787: INFO: Created: latency-svc-r8nbg Apr 2 00:32:21.793: INFO: Got endpoints: latency-svc-r8nbg [1.9053522s] Apr 2 00:32:21.810: INFO: Created: latency-svc-d45fc Apr 2 00:32:21.822: INFO: Got endpoints: latency-svc-d45fc [1.808314234s] Apr 2 00:32:21.862: INFO: Created: latency-svc-59nrn Apr 2 00:32:21.877: INFO: Got endpoints: latency-svc-59nrn [1.772284089s] Apr 2 00:32:21.919: INFO: Created: latency-svc-7wp99 Apr 2 00:32:21.942: INFO: Got endpoints: latency-svc-7wp99 [1.454667119s] Apr 2 00:32:21.967: INFO: Created: latency-svc-768r5 Apr 2 00:32:21.997: INFO: Got endpoints: latency-svc-768r5 [1.154138463s] Apr 2 00:32:22.069: INFO: Created: latency-svc-v9ks5 Apr 2 00:32:22.092: INFO: Got endpoints: latency-svc-v9ks5 [1.246126263s] Apr 2 00:32:22.260: INFO: Created: latency-svc-rlblf Apr 2 00:32:22.315: INFO: Got endpoints: latency-svc-rlblf [1.40267153s] Apr 2 00:32:22.315: INFO: Created: latency-svc-kq2lv Apr 2 00:32:22.415: INFO: Got endpoints: latency-svc-kq2lv [1.21535462s] Apr 2 00:32:22.425: INFO: Created: latency-svc-gqwrs Apr 2 00:32:22.440: INFO: Got endpoints: latency-svc-gqwrs [1.210385949s] Apr 2 00:32:22.468: INFO: Created: latency-svc-dx9f7 Apr 2 00:32:22.482: INFO: Got endpoints: latency-svc-dx9f7 [1.198563538s] Apr 2 00:32:22.577: INFO: Created: latency-svc-khw2b Apr 2 00:32:22.606: INFO: Got endpoints: latency-svc-khw2b [1.183542643s] Apr 2 00:32:22.657: INFO: Created: latency-svc-6crpz Apr 2 00:32:22.674: INFO: Got endpoints: latency-svc-6crpz [1.168070546s] Apr 2 00:32:22.751: INFO: Created: latency-svc-cj6bl Apr 2 00:32:22.795: INFO: Got endpoints: latency-svc-cj6bl [1.235936347s] Apr 2 00:32:22.845: INFO: Created: latency-svc-s5dtz Apr 2 00:32:22.888: INFO: Got endpoints: latency-svc-s5dtz [1.250798911s] Apr 2 00:32:22.899: INFO: Created: latency-svc-dm7ns Apr 2 00:32:22.919: INFO: Got endpoints: 
latency-svc-dm7ns [1.203930394s] Apr 2 00:32:22.970: INFO: Created: latency-svc-kqjxp Apr 2 00:32:23.032: INFO: Got endpoints: latency-svc-kqjxp [1.239172903s] Apr 2 00:32:23.047: INFO: Created: latency-svc-9l7d6 Apr 2 00:32:23.063: INFO: Got endpoints: latency-svc-9l7d6 [1.240642664s] Apr 2 00:32:23.096: INFO: Created: latency-svc-h7rnb Apr 2 00:32:23.111: INFO: Got endpoints: latency-svc-h7rnb [1.233929409s] Apr 2 00:32:23.128: INFO: Created: latency-svc-hvpxm Apr 2 00:32:23.152: INFO: Got endpoints: latency-svc-hvpxm [1.209938421s] Apr 2 00:32:23.186: INFO: Created: latency-svc-78nqf Apr 2 00:32:23.201: INFO: Got endpoints: latency-svc-78nqf [1.203721585s] Apr 2 00:32:23.234: INFO: Created: latency-svc-fbcxd Apr 2 00:32:23.249: INFO: Got endpoints: latency-svc-fbcxd [1.156611271s] Apr 2 00:32:23.284: INFO: Created: latency-svc-hb84h Apr 2 00:32:23.296: INFO: Got endpoints: latency-svc-hb84h [981.410561ms] Apr 2 00:32:23.314: INFO: Created: latency-svc-4rkbg Apr 2 00:32:23.326: INFO: Got endpoints: latency-svc-4rkbg [910.943002ms] Apr 2 00:32:23.343: INFO: Created: latency-svc-k96kp Apr 2 00:32:23.357: INFO: Got endpoints: latency-svc-k96kp [916.671814ms] Apr 2 00:32:23.377: INFO: Created: latency-svc-wk4t2 Apr 2 00:32:23.409: INFO: Got endpoints: latency-svc-wk4t2 [927.40588ms] Apr 2 00:32:23.431: INFO: Created: latency-svc-pphc8 Apr 2 00:32:23.453: INFO: Got endpoints: latency-svc-pphc8 [846.517959ms] Apr 2 00:32:23.476: INFO: Created: latency-svc-7w6mt Apr 2 00:32:23.488: INFO: Got endpoints: latency-svc-7w6mt [813.992439ms] Apr 2 00:32:23.506: INFO: Created: latency-svc-b9f2d Apr 2 00:32:23.535: INFO: Got endpoints: latency-svc-b9f2d [739.780819ms] Apr 2 00:32:23.541: INFO: Created: latency-svc-bmr5f Apr 2 00:32:23.554: INFO: Got endpoints: latency-svc-bmr5f [666.555284ms] Apr 2 00:32:23.575: INFO: Created: latency-svc-47qlh Apr 2 00:32:23.591: INFO: Got endpoints: latency-svc-47qlh [672.067603ms] Apr 2 00:32:23.612: INFO: Created: latency-svc-qbhx5 Apr 2 
00:32:23.627: INFO: Got endpoints: latency-svc-qbhx5 [594.462104ms] Apr 2 00:32:23.667: INFO: Created: latency-svc-6djt8 Apr 2 00:32:23.698: INFO: Got endpoints: latency-svc-6djt8 [634.616023ms] Apr 2 00:32:23.699: INFO: Created: latency-svc-t2mzn Apr 2 00:32:23.710: INFO: Got endpoints: latency-svc-t2mzn [599.111674ms] Apr 2 00:32:23.734: INFO: Created: latency-svc-w7rvb Apr 2 00:32:23.746: INFO: Got endpoints: latency-svc-w7rvb [593.686342ms] Apr 2 00:32:23.793: INFO: Created: latency-svc-5dcg6 Apr 2 00:32:23.815: INFO: Got endpoints: latency-svc-5dcg6 [614.343661ms] Apr 2 00:32:23.816: INFO: Created: latency-svc-cn9mv Apr 2 00:32:23.823: INFO: Got endpoints: latency-svc-cn9mv [574.262518ms] Apr 2 00:32:23.839: INFO: Created: latency-svc-vgk92 Apr 2 00:32:23.848: INFO: Got endpoints: latency-svc-vgk92 [551.235187ms] Apr 2 00:32:23.866: INFO: Created: latency-svc-8ts4j Apr 2 00:32:23.878: INFO: Got endpoints: latency-svc-8ts4j [551.660993ms] Apr 2 00:32:23.924: INFO: Created: latency-svc-5vs5k Apr 2 00:32:23.944: INFO: Got endpoints: latency-svc-5vs5k [587.198399ms] Apr 2 00:32:23.944: INFO: Created: latency-svc-xgz4g Apr 2 00:32:23.956: INFO: Got endpoints: latency-svc-xgz4g [546.126357ms] Apr 2 00:32:23.978: INFO: Created: latency-svc-x5wst Apr 2 00:32:23.991: INFO: Got endpoints: latency-svc-x5wst [538.531921ms] Apr 2 00:32:24.014: INFO: Created: latency-svc-4pvfh Apr 2 00:32:24.043: INFO: Got endpoints: latency-svc-4pvfh [555.46795ms] Apr 2 00:32:24.064: INFO: Created: latency-svc-zwv8g Apr 2 00:32:24.112: INFO: Got endpoints: latency-svc-zwv8g [576.749077ms] Apr 2 00:32:24.136: INFO: Created: latency-svc-mbvlg Apr 2 00:32:24.163: INFO: Got endpoints: latency-svc-mbvlg [608.892294ms] Apr 2 00:32:24.193: INFO: Created: latency-svc-f45cq Apr 2 00:32:24.220: INFO: Got endpoints: latency-svc-f45cq [628.617435ms] Apr 2 00:32:24.235: INFO: Created: latency-svc-srprk Apr 2 00:32:24.325: INFO: Got endpoints: latency-svc-srprk [698.538068ms] Apr 2 00:32:24.329: INFO: 
Created: latency-svc-pbsz5 Apr 2 00:32:24.355: INFO: Got endpoints: latency-svc-pbsz5 [657.295462ms] Apr 2 00:32:24.356: INFO: Created: latency-svc-nw7cj Apr 2 00:32:24.369: INFO: Got endpoints: latency-svc-nw7cj [658.886841ms] Apr 2 00:32:24.385: INFO: Created: latency-svc-fqhtz Apr 2 00:32:24.404: INFO: Got endpoints: latency-svc-fqhtz [658.210189ms] Apr 2 00:32:24.415: INFO: Created: latency-svc-52tns Apr 2 00:32:24.469: INFO: Got endpoints: latency-svc-52tns [653.73252ms] Apr 2 00:32:24.470: INFO: Created: latency-svc-c94l4 Apr 2 00:32:24.476: INFO: Got endpoints: latency-svc-c94l4 [652.92691ms] Apr 2 00:32:24.495: INFO: Created: latency-svc-7tzk6 Apr 2 00:32:24.513: INFO: Got endpoints: latency-svc-7tzk6 [665.216219ms] Apr 2 00:32:24.545: INFO: Created: latency-svc-tv9qh Apr 2 00:32:24.561: INFO: Got endpoints: latency-svc-tv9qh [683.067747ms] Apr 2 00:32:24.656: INFO: Created: latency-svc-bddvm Apr 2 00:32:24.668: INFO: Got endpoints: latency-svc-bddvm [724.452991ms] Apr 2 00:32:24.724: INFO: Created: latency-svc-ntw68 Apr 2 00:32:24.794: INFO: Got endpoints: latency-svc-ntw68 [838.831106ms] Apr 2 00:32:24.811: INFO: Created: latency-svc-djznx Apr 2 00:32:24.825: INFO: Got endpoints: latency-svc-djznx [833.130231ms] Apr 2 00:32:24.838: INFO: Created: latency-svc-h29mb Apr 2 00:32:24.848: INFO: Got endpoints: latency-svc-h29mb [804.609674ms] Apr 2 00:32:24.863: INFO: Created: latency-svc-ddv28 Apr 2 00:32:24.873: INFO: Got endpoints: latency-svc-ddv28 [761.414115ms] Apr 2 00:32:24.919: INFO: Created: latency-svc-2n8d6 Apr 2 00:32:24.938: INFO: Got endpoints: latency-svc-2n8d6 [775.053271ms] Apr 2 00:32:24.955: INFO: Created: latency-svc-ln64s Apr 2 00:32:24.988: INFO: Got endpoints: latency-svc-ln64s [767.950722ms] Apr 2 00:32:25.044: INFO: Created: latency-svc-2fcxp Apr 2 00:32:25.052: INFO: Got endpoints: latency-svc-2fcxp [726.345082ms] Apr 2 00:32:25.072: INFO: Created: latency-svc-jnbdw Apr 2 00:32:25.088: INFO: Got endpoints: latency-svc-jnbdw 
[732.497349ms] Apr 2 00:32:25.111: INFO: Created: latency-svc-f2h7b Apr 2 00:32:25.142: INFO: Got endpoints: latency-svc-f2h7b [772.911408ms] Apr 2 00:32:25.206: INFO: Created: latency-svc-lh6zk Apr 2 00:32:25.228: INFO: Created: latency-svc-tx2pn Apr 2 00:32:25.228: INFO: Got endpoints: latency-svc-lh6zk [823.742031ms] Apr 2 00:32:25.244: INFO: Got endpoints: latency-svc-tx2pn [775.028943ms] Apr 2 00:32:25.273: INFO: Created: latency-svc-6n2mf Apr 2 00:32:25.297: INFO: Got endpoints: latency-svc-6n2mf [820.93579ms] Apr 2 00:32:25.343: INFO: Created: latency-svc-8br6l Apr 2 00:32:25.351: INFO: Got endpoints: latency-svc-8br6l [838.093545ms] Apr 2 00:32:25.372: INFO: Created: latency-svc-d4rhc Apr 2 00:32:25.387: INFO: Got endpoints: latency-svc-d4rhc [826.292739ms] Apr 2 00:32:25.414: INFO: Created: latency-svc-gqppp Apr 2 00:32:25.430: INFO: Got endpoints: latency-svc-gqppp [761.091191ms] Apr 2 00:32:25.469: INFO: Created: latency-svc-7sjzt Apr 2 00:32:25.489: INFO: Got endpoints: latency-svc-7sjzt [694.91377ms] Apr 2 00:32:25.490: INFO: Created: latency-svc-kk4h4 Apr 2 00:32:25.507: INFO: Got endpoints: latency-svc-kk4h4 [682.568893ms] Apr 2 00:32:25.526: INFO: Created: latency-svc-r7qnn Apr 2 00:32:25.537: INFO: Got endpoints: latency-svc-r7qnn [689.138672ms] Apr 2 00:32:25.558: INFO: Created: latency-svc-g824m Apr 2 00:32:25.625: INFO: Got endpoints: latency-svc-g824m [751.293035ms] Apr 2 00:32:25.639: INFO: Created: latency-svc-spp4n Apr 2 00:32:25.657: INFO: Got endpoints: latency-svc-spp4n [718.164288ms] Apr 2 00:32:25.676: INFO: Created: latency-svc-wbm97 Apr 2 00:32:25.687: INFO: Got endpoints: latency-svc-wbm97 [699.139739ms] Apr 2 00:32:25.700: INFO: Created: latency-svc-mzc2z Apr 2 00:32:25.711: INFO: Got endpoints: latency-svc-mzc2z [658.793448ms] Apr 2 00:32:25.744: INFO: Created: latency-svc-qt822 Apr 2 00:32:25.768: INFO: Got endpoints: latency-svc-qt822 [680.28105ms] Apr 2 00:32:25.768: INFO: Created: latency-svc-f98b6 Apr 2 00:32:25.783: INFO: Got 
endpoints: latency-svc-f98b6 [641.209039ms] Apr 2 00:32:25.807: INFO: Created: latency-svc-9glv8 Apr 2 00:32:25.825: INFO: Got endpoints: latency-svc-9glv8 [596.751106ms] Apr 2 00:32:25.844: INFO: Created: latency-svc-5ssmt Apr 2 00:32:25.864: INFO: Got endpoints: latency-svc-5ssmt [619.89113ms] Apr 2 00:32:25.880: INFO: Created: latency-svc-htjtr Apr 2 00:32:25.897: INFO: Got endpoints: latency-svc-htjtr [599.820317ms] Apr 2 00:32:25.918: INFO: Created: latency-svc-gcf6w Apr 2 00:32:25.933: INFO: Got endpoints: latency-svc-gcf6w [581.497457ms] Apr 2 00:32:25.948: INFO: Created: latency-svc-vbdps Apr 2 00:32:25.957: INFO: Got endpoints: latency-svc-vbdps [569.149656ms] Apr 2 00:32:25.990: INFO: Created: latency-svc-tg9f8 Apr 2 00:32:26.011: INFO: Got endpoints: latency-svc-tg9f8 [581.842999ms] Apr 2 00:32:26.012: INFO: Created: latency-svc-6cvnf Apr 2 00:32:26.042: INFO: Got endpoints: latency-svc-6cvnf [552.599836ms] Apr 2 00:32:26.075: INFO: Created: latency-svc-qgm7s Apr 2 00:32:26.121: INFO: Got endpoints: latency-svc-qgm7s [613.890051ms] Apr 2 00:32:26.123: INFO: Created: latency-svc-2wbgg Apr 2 00:32:26.142: INFO: Got endpoints: latency-svc-2wbgg [605.124768ms] Apr 2 00:32:26.170: INFO: Created: latency-svc-4lv4z Apr 2 00:32:26.184: INFO: Got endpoints: latency-svc-4lv4z [559.490366ms] Apr 2 00:32:26.210: INFO: Created: latency-svc-nl9w6 Apr 2 00:32:26.220: INFO: Got endpoints: latency-svc-nl9w6 [563.316808ms] Apr 2 00:32:26.253: INFO: Created: latency-svc-l2d7n Apr 2 00:32:26.278: INFO: Created: latency-svc-zxkpm Apr 2 00:32:26.278: INFO: Got endpoints: latency-svc-l2d7n [591.076768ms] Apr 2 00:32:26.292: INFO: Got endpoints: latency-svc-zxkpm [581.381916ms] Apr 2 00:32:26.314: INFO: Created: latency-svc-bm689 Apr 2 00:32:26.328: INFO: Got endpoints: latency-svc-bm689 [559.775157ms] Apr 2 00:32:26.328: INFO: Latencies: [28.645178ms 75.715699ms 93.443071ms 168.856424ms 195.0819ms 232.328454ms 291.213825ms 303.742593ms 333.742768ms 431.457623ms 454.808876ms 
465.583424ms 538.531921ms 546.126357ms 551.235187ms 551.660993ms 552.599836ms 555.46795ms 559.490366ms 559.775157ms 563.316808ms 569.149656ms 574.262518ms 576.749077ms 581.381916ms 581.497457ms 581.842999ms 587.198399ms 591.076768ms 593.686342ms 594.462104ms 596.751106ms 599.111674ms 599.820317ms 601.060774ms 602.996536ms 603.269314ms 605.124768ms 608.892294ms 612.052811ms 613.890051ms 614.343661ms 619.89113ms 621.152937ms 628.617435ms 629.535966ms 632.735624ms 633.368274ms 634.616023ms 640.42248ms 641.209039ms 652.842923ms 652.92691ms 653.73252ms 655.721886ms 657.295462ms 658.210189ms 658.793448ms 658.886841ms 659.714397ms 664.478439ms 664.568416ms 664.714386ms 665.216219ms 665.582601ms 666.555284ms 670.019623ms 672.067603ms 675.62743ms 677.339801ms 677.610734ms 678.567005ms 680.28105ms 682.568893ms 683.067747ms 685.734533ms 687.617728ms 688.162208ms 689.138672ms 690.419622ms 691.61652ms 691.725944ms 693.555751ms 693.706021ms 694.246784ms 694.262149ms 694.91377ms 695.131894ms 698.41872ms 698.538068ms 699.139739ms 700.917546ms 701.246802ms 706.741231ms 707.191234ms 707.351906ms 707.669387ms 709.974088ms 710.979107ms 712.970165ms 718.164288ms 719.678133ms 719.965229ms 720.061082ms 723.891343ms 724.452991ms 726.345082ms 726.772681ms 730.11824ms 730.965991ms 731.408398ms 731.433274ms 731.444888ms 732.335142ms 732.497349ms 736.615431ms 738.678819ms 739.780819ms 744.425777ms 751.293035ms 754.120643ms 754.780399ms 761.091191ms 761.414115ms 763.979531ms 765.148814ms 767.052634ms 767.950722ms 768.524056ms 772.911408ms 775.028943ms 775.053271ms 778.682388ms 782.251514ms 783.723517ms 784.763601ms 785.196506ms 787.313654ms 788.477335ms 804.609674ms 808.331952ms 809.068798ms 809.282486ms 813.518191ms 813.992439ms 818.15243ms 820.93579ms 823.742031ms 826.292739ms 826.888148ms 827.228467ms 833.130231ms 838.093545ms 838.401055ms 838.831106ms 846.517959ms 874.982119ms 910.943002ms 916.671814ms 927.40588ms 981.410561ms 1.006221288s 1.017119817s 1.031929665s 1.154138463s 
1.156611271s 1.168070546s 1.183542643s 1.198563538s 1.203721585s 1.203930394s 1.209566321s 1.209938421s 1.210385949s 1.21535462s 1.233929409s 1.235936347s 1.239172903s 1.240642664s 1.246126263s 1.250798911s 1.257648594s 1.302399682s 1.40267153s 1.454667119s 1.593357867s 1.772284089s 1.808314234s 1.898110772s 1.9053522s 1.92003003s 1.924668039s 1.981067128s 2.030091036s 2.049859463s 2.086209509s 2.113548109s 2.138412577s 2.211633477s 2.264812282s] Apr 2 00:32:26.328: INFO: 50 %ile: 718.164288ms Apr 2 00:32:26.328: INFO: 90 %ile: 1.250798911s Apr 2 00:32:26.328: INFO: 99 %ile: 2.211633477s Apr 2 00:32:26.328: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:32:26.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6744" for this suite. • [SLOW TEST:14.696 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":174,"skipped":2949,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:32:26.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to 
be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-8240 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8240 STEP: creating replication controller externalsvc in namespace services-8240 I0402 00:32:26.508704 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8240, replica count: 2 I0402 00:32:29.559175 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0402 00:32:32.559381 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 2 00:32:32.670: INFO: Creating new exec pod Apr 2 00:32:36.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8240 execpodtfxcc -- /bin/sh -x -c nslookup nodeport-service' Apr 2 00:32:36.915: INFO: stderr: "I0402 00:32:36.835788 2385 log.go:172] (0xc000a780b0) (0xc0004d6b40) Create stream\nI0402 00:32:36.835836 2385 log.go:172] (0xc000a780b0) (0xc0004d6b40) Stream added, broadcasting: 1\nI0402 00:32:36.838118 2385 log.go:172] (0xc000a780b0) Reply frame received for 1\nI0402 00:32:36.838159 2385 log.go:172] (0xc000a780b0) (0xc000a6e000) Create stream\nI0402 00:32:36.838171 2385 log.go:172] (0xc000a780b0) (0xc000a6e000) Stream added, broadcasting: 3\nI0402 00:32:36.838890 2385 
log.go:172] (0xc000a780b0) Reply frame received for 3\nI0402 00:32:36.838924 2385 log.go:172] (0xc000a780b0) (0xc0006bf2c0) Create stream\nI0402 00:32:36.838937 2385 log.go:172] (0xc000a780b0) (0xc0006bf2c0) Stream added, broadcasting: 5\nI0402 00:32:36.839649 2385 log.go:172] (0xc000a780b0) Reply frame received for 5\nI0402 00:32:36.902824 2385 log.go:172] (0xc000a780b0) Data frame received for 5\nI0402 00:32:36.902850 2385 log.go:172] (0xc0006bf2c0) (5) Data frame handling\nI0402 00:32:36.902869 2385 log.go:172] (0xc0006bf2c0) (5) Data frame sent\n+ nslookup nodeport-service\nI0402 00:32:36.908106 2385 log.go:172] (0xc000a780b0) Data frame received for 3\nI0402 00:32:36.908127 2385 log.go:172] (0xc000a6e000) (3) Data frame handling\nI0402 00:32:36.908142 2385 log.go:172] (0xc000a6e000) (3) Data frame sent\nI0402 00:32:36.908691 2385 log.go:172] (0xc000a780b0) Data frame received for 3\nI0402 00:32:36.908706 2385 log.go:172] (0xc000a6e000) (3) Data frame handling\nI0402 00:32:36.908716 2385 log.go:172] (0xc000a6e000) (3) Data frame sent\nI0402 00:32:36.909076 2385 log.go:172] (0xc000a780b0) Data frame received for 3\nI0402 00:32:36.909100 2385 log.go:172] (0xc000a6e000) (3) Data frame handling\nI0402 00:32:36.909271 2385 log.go:172] (0xc000a780b0) Data frame received for 5\nI0402 00:32:36.909289 2385 log.go:172] (0xc0006bf2c0) (5) Data frame handling\nI0402 00:32:36.910764 2385 log.go:172] (0xc000a780b0) Data frame received for 1\nI0402 00:32:36.910786 2385 log.go:172] (0xc0004d6b40) (1) Data frame handling\nI0402 00:32:36.910819 2385 log.go:172] (0xc0004d6b40) (1) Data frame sent\nI0402 00:32:36.910838 2385 log.go:172] (0xc000a780b0) (0xc0004d6b40) Stream removed, broadcasting: 1\nI0402 00:32:36.910865 2385 log.go:172] (0xc000a780b0) Go away received\nI0402 00:32:36.911147 2385 log.go:172] (0xc000a780b0) (0xc0004d6b40) Stream removed, broadcasting: 1\nI0402 00:32:36.911160 2385 log.go:172] (0xc000a780b0) (0xc000a6e000) Stream removed, broadcasting: 3\nI0402 
00:32:36.911168 2385 log.go:172] (0xc000a780b0) (0xc0006bf2c0) Stream removed, broadcasting: 5\n" Apr 2 00:32:36.915: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-8240.svc.cluster.local\tcanonical name = externalsvc.services-8240.svc.cluster.local.\nName:\texternalsvc.services-8240.svc.cluster.local\nAddress: 10.96.153.117\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8240, will wait for the garbage collector to delete the pods Apr 2 00:32:37.010: INFO: Deleting ReplicationController externalsvc took: 17.539737ms Apr 2 00:32:37.410: INFO: Terminating ReplicationController externalsvc pods took: 400.254759ms Apr 2 00:32:53.035: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:32:53.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8240" for this suite. 
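For context, the ExternalName service that this test switches the NodePort service into can be sketched roughly as follows. The service and namespace names are taken from the log above; the manifest itself is an illustrative reconstruction, not the test's actual object:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-8240
spec:
  type: ExternalName
  # DNS name that cluster DNS will answer with as a CNAME,
  # matching the nslookup stdout captured above
  externalName: externalsvc.services-8240.svc.cluster.local
```

After the type change, cluster DNS resolves `nodeport-service.services-8240.svc.cluster.local` to a CNAME for `externalsvc.services-8240.svc.cluster.local`, which is exactly what the `nslookup` output in the exec pod shows.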
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:26.707 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":175,"skipped":2966,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:32:53.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 2 00:32:59.189: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-97 PodName:pod-sharedvolume-a4f3a777-d362-4bdb-b4be-e448a4cbbeac ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:32:59.189: INFO: >>> kubeConfig: /root/.kube/config I0402 00:32:59.221282 7 log.go:172] 
(0xc002a491e0) (0xc001a90dc0) Create stream I0402 00:32:59.221325 7 log.go:172] (0xc002a491e0) (0xc001a90dc0) Stream added, broadcasting: 1 I0402 00:32:59.222784 7 log.go:172] (0xc002a491e0) Reply frame received for 1 I0402 00:32:59.222822 7 log.go:172] (0xc002a491e0) (0xc0010f8960) Create stream I0402 00:32:59.222830 7 log.go:172] (0xc002a491e0) (0xc0010f8960) Stream added, broadcasting: 3 I0402 00:32:59.223672 7 log.go:172] (0xc002a491e0) Reply frame received for 3 I0402 00:32:59.223701 7 log.go:172] (0xc002a491e0) (0xc001a90e60) Create stream I0402 00:32:59.223712 7 log.go:172] (0xc002a491e0) (0xc001a90e60) Stream added, broadcasting: 5 I0402 00:32:59.224555 7 log.go:172] (0xc002a491e0) Reply frame received for 5 I0402 00:32:59.300495 7 log.go:172] (0xc002a491e0) Data frame received for 5 I0402 00:32:59.300518 7 log.go:172] (0xc001a90e60) (5) Data frame handling I0402 00:32:59.300562 7 log.go:172] (0xc002a491e0) Data frame received for 3 I0402 00:32:59.300601 7 log.go:172] (0xc0010f8960) (3) Data frame handling I0402 00:32:59.300622 7 log.go:172] (0xc0010f8960) (3) Data frame sent I0402 00:32:59.300635 7 log.go:172] (0xc002a491e0) Data frame received for 3 I0402 00:32:59.300651 7 log.go:172] (0xc0010f8960) (3) Data frame handling I0402 00:32:59.302241 7 log.go:172] (0xc002a491e0) Data frame received for 1 I0402 00:32:59.302270 7 log.go:172] (0xc001a90dc0) (1) Data frame handling I0402 00:32:59.302304 7 log.go:172] (0xc001a90dc0) (1) Data frame sent I0402 00:32:59.302331 7 log.go:172] (0xc002a491e0) (0xc001a90dc0) Stream removed, broadcasting: 1 I0402 00:32:59.302367 7 log.go:172] (0xc002a491e0) Go away received I0402 00:32:59.302458 7 log.go:172] (0xc002a491e0) (0xc001a90dc0) Stream removed, broadcasting: 1 I0402 00:32:59.302484 7 log.go:172] (0xc002a491e0) (0xc0010f8960) Stream removed, broadcasting: 3 I0402 00:32:59.302505 7 log.go:172] (0xc002a491e0) (0xc001a90e60) Stream removed, broadcasting: 5 Apr 2 00:32:59.302: INFO: Exec stderr: "" [AfterEach] 
[sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:32:59.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-97" for this suite. • [SLOW TEST:6.223 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":176,"skipped":3012,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:32:59.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-cf6fa965-a127-423c-b555-3b5396c513f2 STEP: Creating a pod to test consume configMaps Apr 2 00:32:59.421: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-717aa15e-acfc-4750-b716-14a475e1d61a" in namespace "projected-629" to be "Succeeded or Failed" Apr 2 00:32:59.455: INFO: Pod 
"pod-projected-configmaps-717aa15e-acfc-4750-b716-14a475e1d61a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.886744ms Apr 2 00:33:01.460: INFO: Pod "pod-projected-configmaps-717aa15e-acfc-4750-b716-14a475e1d61a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038247624s Apr 2 00:33:03.464: INFO: Pod "pod-projected-configmaps-717aa15e-acfc-4750-b716-14a475e1d61a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04277114s STEP: Saw pod success Apr 2 00:33:03.464: INFO: Pod "pod-projected-configmaps-717aa15e-acfc-4750-b716-14a475e1d61a" satisfied condition "Succeeded or Failed" Apr 2 00:33:03.467: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-717aa15e-acfc-4750-b716-14a475e1d61a container projected-configmap-volume-test: STEP: delete the pod Apr 2 00:33:03.529: INFO: Waiting for pod pod-projected-configmaps-717aa15e-acfc-4750-b716-14a475e1d61a to disappear Apr 2 00:33:03.545: INFO: Pod pod-projected-configmaps-717aa15e-acfc-4750-b716-14a475e1d61a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:33:03.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-629" for this suite. 
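The projected-ConfigMap-with-defaultMode setup exercised above corresponds roughly to a pod spec like the one below. The pod, container, and ConfigMap names are illustrative stand-ins, not the generated names from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/key"]  # illustrative key name
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400          # file mode applied to the projected files
      sources:
      - configMap:
          name: my-config        # illustrative ConfigMap name
```

The test then asserts the pod reaches `Succeeded` and that the file contents (and mode) in the container logs match what was projected.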
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":3012,"failed":0} ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:33:03.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:33:03.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7168" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":178,"skipped":3012,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:33:03.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:33:03.780: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:33:10.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7944" for this suite. 
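The objects being listed in this test are created through the apiextensions client; a minimal v1 CustomResourceDefinition of the general kind involved might look like the following. Group, plural, and kind names here are illustrative, not the randomized ones the test generates:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```

Listing such objects (e.g. `kubectl get crd`) is the operation the conformance test verifies against the apiserver.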
• [SLOW TEST:6.368 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":179,"skipped":3021,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:33:10.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 2 00:33:10.123: INFO: Waiting up to 5m0s for pod "pod-3b5a772d-96de-4b11-9cc8-fedcd8472b9f" in namespace "emptydir-2304" to be "Succeeded or Failed" Apr 2 00:33:10.136: INFO: Pod "pod-3b5a772d-96de-4b11-9cc8-fedcd8472b9f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.362309ms Apr 2 00:33:12.141: INFO: Pod "pod-3b5a772d-96de-4b11-9cc8-fedcd8472b9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017808692s Apr 2 00:33:14.145: INFO: Pod "pod-3b5a772d-96de-4b11-9cc8-fedcd8472b9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022243046s STEP: Saw pod success Apr 2 00:33:14.145: INFO: Pod "pod-3b5a772d-96de-4b11-9cc8-fedcd8472b9f" satisfied condition "Succeeded or Failed" Apr 2 00:33:14.149: INFO: Trying to get logs from node latest-worker pod pod-3b5a772d-96de-4b11-9cc8-fedcd8472b9f container test-container: STEP: delete the pod Apr 2 00:33:14.185: INFO: Waiting for pod pod-3b5a772d-96de-4b11-9cc8-fedcd8472b9f to disappear Apr 2 00:33:14.199: INFO: Pod pod-3b5a772d-96de-4b11-9cc8-fedcd8472b9f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:33:14.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2304" for this suite. 
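The (root,0777,tmpfs) case above boils down to an emptyDir volume backed by memory (tmpfs) mounted into a test container; a rough illustrative sketch, with names that are stand-ins for the generated ones:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # the e2e test writes a file with the requested mode and
    # checks the volume's mount options and file permissions
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory    # tmpfs-backed emptyDir
```

With `medium: Memory` the kubelet mounts a tmpfs for the volume, which is what the "(… ,tmpfs)" variant of this test family exercises.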
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3042,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:33:14.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 2 00:33:14.274: INFO: Waiting up to 5m0s for pod "downward-api-60489c0c-940f-47e1-b76d-e69aaf889611" in namespace "downward-api-9729" to be "Succeeded or Failed" Apr 2 00:33:14.283: INFO: Pod "downward-api-60489c0c-940f-47e1-b76d-e69aaf889611": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168738ms Apr 2 00:33:16.302: INFO: Pod "downward-api-60489c0c-940f-47e1-b76d-e69aaf889611": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027531403s Apr 2 00:33:18.306: INFO: Pod "downward-api-60489c0c-940f-47e1-b76d-e69aaf889611": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031728617s STEP: Saw pod success Apr 2 00:33:18.306: INFO: Pod "downward-api-60489c0c-940f-47e1-b76d-e69aaf889611" satisfied condition "Succeeded or Failed" Apr 2 00:33:18.309: INFO: Trying to get logs from node latest-worker2 pod downward-api-60489c0c-940f-47e1-b76d-e69aaf889611 container dapi-container: STEP: delete the pod Apr 2 00:33:18.347: INFO: Waiting for pod downward-api-60489c0c-940f-47e1-b76d-e69aaf889611 to disappear Apr 2 00:33:18.359: INFO: Pod downward-api-60489c0c-940f-47e1-b76d-e69aaf889611 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:33:18.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9729" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3069,"failed":0} S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:33:18.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-29a8d651-644f-4df0-88ca-524af04cc9fc STEP: Creating the pod STEP: Updating configmap 
projected-configmap-test-upd-29a8d651-644f-4df0-88ca-524af04cc9fc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:34:34.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9131" for this suite. • [SLOW TEST:76.441 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3070,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:34:34.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 2 00:34:34.889: INFO: Waiting up to 5m0s for pod "pod-e5c85a42-85a9-4f70-96ff-b27e29c3c2c3" in namespace "emptydir-305" to be "Succeeded or Failed" Apr 2 00:34:34.893: INFO: Pod 
"pod-e5c85a42-85a9-4f70-96ff-b27e29c3c2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365229ms Apr 2 00:34:36.905: INFO: Pod "pod-e5c85a42-85a9-4f70-96ff-b27e29c3c2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016468391s Apr 2 00:34:38.909: INFO: Pod "pod-e5c85a42-85a9-4f70-96ff-b27e29c3c2c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020350066s STEP: Saw pod success Apr 2 00:34:38.909: INFO: Pod "pod-e5c85a42-85a9-4f70-96ff-b27e29c3c2c3" satisfied condition "Succeeded or Failed" Apr 2 00:34:38.912: INFO: Trying to get logs from node latest-worker2 pod pod-e5c85a42-85a9-4f70-96ff-b27e29c3c2c3 container test-container: STEP: delete the pod Apr 2 00:34:38.930: INFO: Waiting for pod pod-e5c85a42-85a9-4f70-96ff-b27e29c3c2c3 to disappear Apr 2 00:34:38.934: INFO: Pod pod-e5c85a42-85a9-4f70-96ff-b27e29c3c2c3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:34:38.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-305" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3076,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:34:38.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:34:39.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6092" for this suite. 
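Each `•{"msg":...}` line emitted between specs is a plain JSON progress record, so suite totals can be recovered from it directly. A small sketch; the sample line is copied from the EmptyDir (root,0777,default) result above:

```python
import json

# Progress record copied verbatim from the EmptyDir result in this log.
line = ('{"msg":"PASSED [sig-storage] EmptyDir volumes should support '
        '(root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]",'
        '"total":275,"completed":183,"skipped":3076,"failed":0}')

record = json.loads(line)
remaining = record["total"] - record["completed"]  # specs still to run
```

Filtering a full run's output for these records is a convenient way to track conformance progress without parsing the free-form log text.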
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":184,"skipped":3084,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:34:39.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:34:39.226: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 6.333034ms) Apr 2 00:34:39.228: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.69108ms) Apr 2 00:34:39.231: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.480862ms) Apr 2 00:34:39.233: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.180768ms) Apr 2 00:34:39.235: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.061226ms) Apr 2 00:34:39.238: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.371026ms) Apr 2 00:34:39.240: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.479955ms) Apr 2 00:34:39.243: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.495155ms) Apr 2 00:34:39.245: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.693552ms) Apr 2 00:34:39.248: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.69952ms) Apr 2 00:34:39.251: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.455544ms) Apr 2 00:34:39.253: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.724803ms) Apr 2 00:34:39.256: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.089116ms) Apr 2 00:34:39.259: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.003933ms) Apr 2 00:34:39.263: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.015202ms) Apr 2 00:34:39.286: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 23.007069ms) Apr 2 00:34:39.289: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.506242ms) Apr 2 00:34:39.292: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.132274ms) Apr 2 00:34:39.296: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.596775ms) Apr 2 00:34:39.300: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.583527ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:34:39.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-833" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":185,"skipped":3103,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:34:39.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:34:39.371: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-fce15201-a450-4843-adee-b5f5a5937fb1" in namespace "security-context-test-1692" to be "Succeeded or Failed" Apr 2 00:34:39.374: INFO: Pod "busybox-readonly-false-fce15201-a450-4843-adee-b5f5a5937fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.647284ms Apr 2 00:34:41.378: INFO: Pod "busybox-readonly-false-fce15201-a450-4843-adee-b5f5a5937fb1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006695949s Apr 2 00:34:43.382: INFO: Pod "busybox-readonly-false-fce15201-a450-4843-adee-b5f5a5937fb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011223799s Apr 2 00:34:43.382: INFO: Pod "busybox-readonly-false-fce15201-a450-4843-adee-b5f5a5937fb1" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:34:43.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1692" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3106,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:34:43.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Apr 2 00:34:43.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Apr 2 00:34:43.637: INFO: stderr: "" Apr 2 
00:34:43.637: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:34:43.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8348" for this suite. 
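The kubectl api-versions check above simply asserts that the bare `v1` entry appears in the newline-separated group/version list on stdout. A sketch of that exact-match check, using an abridged copy of the stdout captured above:

```python
# stdout abridged from the `kubectl api-versions` run in this log.
stdout = ("admissionregistration.k8s.io/v1\napps/v1\nbatch/v1\n"
          "storage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n")

api_versions = stdout.splitlines()
# Exact-match membership: entries like "apps/v1" or "storage.k8s.io/v1beta1"
# must not satisfy the check; only the core "v1" group/version counts.
has_core_v1 = "v1" in api_versions
```

Matching whole lines rather than substrings is the important detail here, since nearly every group/version in the list ends in `v1`.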
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":187,"skipped":3121,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:34:43.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:34:43.735: INFO: Creating ReplicaSet my-hostname-basic-61de3e40-5074-44e8-ae24-25f432d68ae5 Apr 2 00:34:43.751: INFO: Pod name my-hostname-basic-61de3e40-5074-44e8-ae24-25f432d68ae5: Found 0 pods out of 1 Apr 2 00:34:48.754: INFO: Pod name my-hostname-basic-61de3e40-5074-44e8-ae24-25f432d68ae5: Found 1 pods out of 1 Apr 2 00:34:48.754: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-61de3e40-5074-44e8-ae24-25f432d68ae5" is running Apr 2 00:34:48.756: INFO: Pod "my-hostname-basic-61de3e40-5074-44e8-ae24-25f432d68ae5-96l84" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 00:34:43 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 00:34:46 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 00:34:46 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 00:34:43 +0000 UTC Reason: Message:}]) Apr 2 00:34:48.756: INFO: Trying to dial the pod Apr 2 00:34:53.768: INFO: Controller my-hostname-basic-61de3e40-5074-44e8-ae24-25f432d68ae5: Got expected result from replica 1 [my-hostname-basic-61de3e40-5074-44e8-ae24-25f432d68ae5-96l84]: "my-hostname-basic-61de3e40-5074-44e8-ae24-25f432d68ae5-96l84", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:34:53.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6329" for this suite. • [SLOW TEST:10.123 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":188,"skipped":3131,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:34:53.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 2 00:34:53.816: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:35:01.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9372" for this suite. • [SLOW TEST:7.514 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":189,"skipped":3153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:35:01.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 2 00:35:11.376: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:35:11.376: INFO: >>> kubeConfig: /root/.kube/config I0402 00:35:11.414378 7 log.go:172] (0xc0037e4370) (0xc000b66960) Create stream I0402 00:35:11.414408 7 log.go:172] (0xc0037e4370) (0xc000b66960) Stream added, broadcasting: 1 I0402 00:35:11.416425 7 log.go:172] (0xc0037e4370) Reply frame received for 1 I0402 00:35:11.416486 7 log.go:172] (0xc0037e4370) (0xc00124a1e0) Create stream I0402 00:35:11.416506 7 log.go:172] (0xc0037e4370) (0xc00124a1e0) Stream added, broadcasting: 3 I0402 00:35:11.417619 7 log.go:172] (0xc0037e4370) Reply frame received for 3 I0402 00:35:11.417697 7 log.go:172] (0xc0037e4370) (0xc000b75900) Create stream I0402 00:35:11.417713 7 log.go:172] (0xc0037e4370) (0xc000b75900) Stream added, broadcasting: 5 I0402 00:35:11.418588 7 log.go:172] (0xc0037e4370) Reply frame received for 5 I0402 00:35:11.487123 7 log.go:172] (0xc0037e4370) Data frame received for 5 I0402 00:35:11.487147 7 log.go:172] (0xc000b75900) (5) Data frame handling I0402 00:35:11.487187 7 log.go:172] (0xc0037e4370) Data frame received for 3 I0402 00:35:11.487238 7 log.go:172] (0xc00124a1e0) (3) Data frame handling I0402 00:35:11.487264 7 log.go:172] (0xc00124a1e0) (3) Data frame sent I0402 00:35:11.487280 7 log.go:172] (0xc0037e4370) Data frame received for 3 I0402 00:35:11.487301 7 log.go:172] (0xc00124a1e0) (3) Data frame handling I0402 00:35:11.488736 7 log.go:172] (0xc0037e4370) Data frame received for 1 I0402 00:35:11.488781 7 log.go:172] 
(0xc000b66960) (1) Data frame handling I0402 00:35:11.488806 7 log.go:172] (0xc000b66960) (1) Data frame sent I0402 00:35:11.488834 7 log.go:172] (0xc0037e4370) (0xc000b66960) Stream removed, broadcasting: 1 I0402 00:35:11.488881 7 log.go:172] (0xc0037e4370) Go away received I0402 00:35:11.488969 7 log.go:172] (0xc0037e4370) (0xc000b66960) Stream removed, broadcasting: 1 I0402 00:35:11.488993 7 log.go:172] (0xc0037e4370) (0xc00124a1e0) Stream removed, broadcasting: 3 I0402 00:35:11.489008 7 log.go:172] (0xc0037e4370) (0xc000b75900) Stream removed, broadcasting: 5 Apr 2 00:35:11.489: INFO: Exec stderr: "" Apr 2 00:35:11.489: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:35:11.489: INFO: >>> kubeConfig: /root/.kube/config I0402 00:35:11.515270 7 log.go:172] (0xc002a49340) (0xc00124a6e0) Create stream I0402 00:35:11.515303 7 log.go:172] (0xc002a49340) (0xc00124a6e0) Stream added, broadcasting: 1 I0402 00:35:11.517457 7 log.go:172] (0xc002a49340) Reply frame received for 1 I0402 00:35:11.517501 7 log.go:172] (0xc002a49340) (0xc000e0fd60) Create stream I0402 00:35:11.517516 7 log.go:172] (0xc002a49340) (0xc000e0fd60) Stream added, broadcasting: 3 I0402 00:35:11.518526 7 log.go:172] (0xc002a49340) Reply frame received for 3 I0402 00:35:11.518572 7 log.go:172] (0xc002a49340) (0xc000e0ff40) Create stream I0402 00:35:11.518588 7 log.go:172] (0xc002a49340) (0xc000e0ff40) Stream added, broadcasting: 5 I0402 00:35:11.519493 7 log.go:172] (0xc002a49340) Reply frame received for 5 I0402 00:35:11.579601 7 log.go:172] (0xc002a49340) Data frame received for 3 I0402 00:35:11.579633 7 log.go:172] (0xc000e0fd60) (3) Data frame handling I0402 00:35:11.579650 7 log.go:172] (0xc000e0fd60) (3) Data frame sent I0402 00:35:11.579672 7 log.go:172] (0xc002a49340) Data frame received for 5 I0402 00:35:11.579712 7 log.go:172] 
(0xc000e0ff40) (5) Data frame handling I0402 00:35:11.579748 7 log.go:172] (0xc002a49340) Data frame received for 3 I0402 00:35:11.579765 7 log.go:172] (0xc000e0fd60) (3) Data frame handling I0402 00:35:11.581840 7 log.go:172] (0xc002a49340) Data frame received for 1 I0402 00:35:11.581868 7 log.go:172] (0xc00124a6e0) (1) Data frame handling I0402 00:35:11.581890 7 log.go:172] (0xc00124a6e0) (1) Data frame sent I0402 00:35:11.582051 7 log.go:172] (0xc002a49340) (0xc00124a6e0) Stream removed, broadcasting: 1 I0402 00:35:11.582185 7 log.go:172] (0xc002a49340) (0xc00124a6e0) Stream removed, broadcasting: 1 I0402 00:35:11.582200 7 log.go:172] (0xc002a49340) (0xc000e0fd60) Stream removed, broadcasting: 3 I0402 00:35:11.582349 7 log.go:172] (0xc002a49340) Go away received I0402 00:35:11.582413 7 log.go:172] (0xc002a49340) (0xc000e0ff40) Stream removed, broadcasting: 5 Apr 2 00:35:11.582: INFO: Exec stderr: "" Apr 2 00:35:11.582: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:35:11.582: INFO: >>> kubeConfig: /root/.kube/config I0402 00:35:11.617710 7 log.go:172] (0xc006208630) (0xc001172640) Create stream I0402 00:35:11.617739 7 log.go:172] (0xc006208630) (0xc001172640) Stream added, broadcasting: 1 I0402 00:35:11.620410 7 log.go:172] (0xc006208630) Reply frame received for 1 I0402 00:35:11.620477 7 log.go:172] (0xc006208630) (0xc00124ac80) Create stream I0402 00:35:11.620494 7 log.go:172] (0xc006208630) (0xc00124ac80) Stream added, broadcasting: 3 I0402 00:35:11.621665 7 log.go:172] (0xc006208630) Reply frame received for 3 I0402 00:35:11.621870 7 log.go:172] (0xc006208630) (0xc000b66be0) Create stream I0402 00:35:11.621883 7 log.go:172] (0xc006208630) (0xc000b66be0) Stream added, broadcasting: 5 I0402 00:35:11.622762 7 log.go:172] (0xc006208630) Reply frame received for 5 I0402 00:35:11.693456 7 log.go:172] 
(0xc006208630) Data frame received for 5 I0402 00:35:11.693496 7 log.go:172] (0xc000b66be0) (5) Data frame handling I0402 00:35:11.693527 7 log.go:172] (0xc006208630) Data frame received for 3 I0402 00:35:11.693544 7 log.go:172] (0xc00124ac80) (3) Data frame handling I0402 00:35:11.693558 7 log.go:172] (0xc00124ac80) (3) Data frame sent I0402 00:35:11.693568 7 log.go:172] (0xc006208630) Data frame received for 3 I0402 00:35:11.693578 7 log.go:172] (0xc00124ac80) (3) Data frame handling I0402 00:35:11.695114 7 log.go:172] (0xc006208630) Data frame received for 1 I0402 00:35:11.695136 7 log.go:172] (0xc001172640) (1) Data frame handling I0402 00:35:11.695147 7 log.go:172] (0xc001172640) (1) Data frame sent I0402 00:35:11.695166 7 log.go:172] (0xc006208630) (0xc001172640) Stream removed, broadcasting: 1 I0402 00:35:11.695182 7 log.go:172] (0xc006208630) Go away received I0402 00:35:11.695294 7 log.go:172] (0xc006208630) (0xc001172640) Stream removed, broadcasting: 1 I0402 00:35:11.695315 7 log.go:172] (0xc006208630) (0xc00124ac80) Stream removed, broadcasting: 3 I0402 00:35:11.695326 7 log.go:172] (0xc006208630) (0xc000b66be0) Stream removed, broadcasting: 5 Apr 2 00:35:11.695: INFO: Exec stderr: "" Apr 2 00:35:11.695: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:35:11.695: INFO: >>> kubeConfig: /root/.kube/config I0402 00:35:11.727249 7 log.go:172] (0xc006208c60) (0xc001172c80) Create stream I0402 00:35:11.727278 7 log.go:172] (0xc006208c60) (0xc001172c80) Stream added, broadcasting: 1 I0402 00:35:11.734269 7 log.go:172] (0xc006208c60) Reply frame received for 1 I0402 00:35:11.734306 7 log.go:172] (0xc006208c60) (0xc0009a65a0) Create stream I0402 00:35:11.734324 7 log.go:172] (0xc006208c60) (0xc0009a65a0) Stream added, broadcasting: 3 I0402 00:35:11.735469 7 log.go:172] (0xc006208c60) Reply frame 
received for 3 I0402 00:35:11.735494 7 log.go:172] (0xc006208c60) (0xc001172dc0) Create stream I0402 00:35:11.735506 7 log.go:172] (0xc006208c60) (0xc001172dc0) Stream added, broadcasting: 5 I0402 00:35:11.737645 7 log.go:172] (0xc006208c60) Reply frame received for 5 I0402 00:35:11.793685 7 log.go:172] (0xc006208c60) Data frame received for 5 I0402 00:35:11.793741 7 log.go:172] (0xc001172dc0) (5) Data frame handling I0402 00:35:11.793773 7 log.go:172] (0xc006208c60) Data frame received for 3 I0402 00:35:11.793794 7 log.go:172] (0xc0009a65a0) (3) Data frame handling I0402 00:35:11.793814 7 log.go:172] (0xc0009a65a0) (3) Data frame sent I0402 00:35:11.793827 7 log.go:172] (0xc006208c60) Data frame received for 3 I0402 00:35:11.793840 7 log.go:172] (0xc0009a65a0) (3) Data frame handling I0402 00:35:11.795600 7 log.go:172] (0xc006208c60) Data frame received for 1 I0402 00:35:11.795624 7 log.go:172] (0xc001172c80) (1) Data frame handling I0402 00:35:11.795645 7 log.go:172] (0xc001172c80) (1) Data frame sent I0402 00:35:11.795666 7 log.go:172] (0xc006208c60) (0xc001172c80) Stream removed, broadcasting: 1 I0402 00:35:11.795688 7 log.go:172] (0xc006208c60) Go away received I0402 00:35:11.795753 7 log.go:172] (0xc006208c60) (0xc001172c80) Stream removed, broadcasting: 1 I0402 00:35:11.795788 7 log.go:172] (0xc006208c60) (0xc0009a65a0) Stream removed, broadcasting: 3 I0402 00:35:11.795799 7 log.go:172] (0xc006208c60) (0xc001172dc0) Stream removed, broadcasting: 5 Apr 2 00:35:11.795: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 2 00:35:11.795: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:35:11.795: INFO: >>> kubeConfig: /root/.kube/config I0402 00:35:11.823228 7 log.go:172] (0xc002a49970) (0xc00124b400) Create stream I0402 
00:35:11.823254 7 log.go:172] (0xc002a49970) (0xc00124b400) Stream added, broadcasting: 1 I0402 00:35:11.825522 7 log.go:172] (0xc002a49970) Reply frame received for 1 I0402 00:35:11.825554 7 log.go:172] (0xc002a49970) (0xc000b66fa0) Create stream I0402 00:35:11.825563 7 log.go:172] (0xc002a49970) (0xc000b66fa0) Stream added, broadcasting: 3 I0402 00:35:11.826377 7 log.go:172] (0xc002a49970) Reply frame received for 3 I0402 00:35:11.826403 7 log.go:172] (0xc002a49970) (0xc0009a6820) Create stream I0402 00:35:11.826416 7 log.go:172] (0xc002a49970) (0xc0009a6820) Stream added, broadcasting: 5 I0402 00:35:11.827069 7 log.go:172] (0xc002a49970) Reply frame received for 5 I0402 00:35:11.885510 7 log.go:172] (0xc002a49970) Data frame received for 5 I0402 00:35:11.885557 7 log.go:172] (0xc0009a6820) (5) Data frame handling I0402 00:35:11.885579 7 log.go:172] (0xc002a49970) Data frame received for 3 I0402 00:35:11.885591 7 log.go:172] (0xc000b66fa0) (3) Data frame handling I0402 00:35:11.885602 7 log.go:172] (0xc000b66fa0) (3) Data frame sent I0402 00:35:11.885615 7 log.go:172] (0xc002a49970) Data frame received for 3 I0402 00:35:11.885628 7 log.go:172] (0xc000b66fa0) (3) Data frame handling I0402 00:35:11.887039 7 log.go:172] (0xc002a49970) Data frame received for 1 I0402 00:35:11.887064 7 log.go:172] (0xc00124b400) (1) Data frame handling I0402 00:35:11.887095 7 log.go:172] (0xc00124b400) (1) Data frame sent I0402 00:35:11.887120 7 log.go:172] (0xc002a49970) (0xc00124b400) Stream removed, broadcasting: 1 I0402 00:35:11.887188 7 log.go:172] (0xc002a49970) Go away received I0402 00:35:11.887220 7 log.go:172] (0xc002a49970) (0xc00124b400) Stream removed, broadcasting: 1 I0402 00:35:11.887247 7 log.go:172] (0xc002a49970) (0xc000b66fa0) Stream removed, broadcasting: 3 I0402 00:35:11.887259 7 log.go:172] (0xc002a49970) (0xc0009a6820) Stream removed, broadcasting: 5 Apr 2 00:35:11.887: INFO: Exec stderr: "" Apr 2 00:35:11.887: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:35:11.887: INFO: >>> kubeConfig: /root/.kube/config I0402 00:35:11.912381 7 log.go:172] (0xc0037e49a0) (0xc000b679a0) Create stream I0402 00:35:11.912406 7 log.go:172] (0xc0037e49a0) (0xc000b679a0) Stream added, broadcasting: 1 I0402 00:35:11.914281 7 log.go:172] (0xc0037e49a0) Reply frame received for 1 I0402 00:35:11.914318 7 log.go:172] (0xc0037e49a0) (0xc0009a68c0) Create stream I0402 00:35:11.914326 7 log.go:172] (0xc0037e49a0) (0xc0009a68c0) Stream added, broadcasting: 3 I0402 00:35:11.915172 7 log.go:172] (0xc0037e49a0) Reply frame received for 3 I0402 00:35:11.915196 7 log.go:172] (0xc0037e49a0) (0xc0009a6d20) Create stream I0402 00:35:11.915202 7 log.go:172] (0xc0037e49a0) (0xc0009a6d20) Stream added, broadcasting: 5 I0402 00:35:11.916110 7 log.go:172] (0xc0037e49a0) Reply frame received for 5 I0402 00:35:11.986605 7 log.go:172] (0xc0037e49a0) Data frame received for 5 I0402 00:35:11.986637 7 log.go:172] (0xc0009a6d20) (5) Data frame handling I0402 00:35:11.986668 7 log.go:172] (0xc0037e49a0) Data frame received for 3 I0402 00:35:11.986712 7 log.go:172] (0xc0009a68c0) (3) Data frame handling I0402 00:35:11.986744 7 log.go:172] (0xc0009a68c0) (3) Data frame sent I0402 00:35:11.986779 7 log.go:172] (0xc0037e49a0) Data frame received for 3 I0402 00:35:11.986803 7 log.go:172] (0xc0009a68c0) (3) Data frame handling I0402 00:35:11.987859 7 log.go:172] (0xc0037e49a0) Data frame received for 1 I0402 00:35:11.987881 7 log.go:172] (0xc000b679a0) (1) Data frame handling I0402 00:35:11.987911 7 log.go:172] (0xc000b679a0) (1) Data frame sent I0402 00:35:11.987944 7 log.go:172] (0xc0037e49a0) (0xc000b679a0) Stream removed, broadcasting: 1 I0402 00:35:11.987968 7 log.go:172] (0xc0037e49a0) Go away received I0402 00:35:11.988215 7 log.go:172] (0xc0037e49a0) (0xc000b679a0) Stream removed, 
broadcasting: 1 I0402 00:35:11.988236 7 log.go:172] (0xc0037e49a0) (0xc0009a68c0) Stream removed, broadcasting: 3 I0402 00:35:11.988247 7 log.go:172] (0xc0037e49a0) (0xc0009a6d20) Stream removed, broadcasting: 5 Apr 2 00:35:11.988: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 2 00:35:11.988: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:35:11.988: INFO: >>> kubeConfig: /root/.kube/config I0402 00:35:12.019328 7 log.go:172] (0xc00261c0b0) (0xc00124bcc0) Create stream I0402 00:35:12.019367 7 log.go:172] (0xc00261c0b0) (0xc00124bcc0) Stream added, broadcasting: 1 I0402 00:35:12.021277 7 log.go:172] (0xc00261c0b0) Reply frame received for 1 I0402 00:35:12.021304 7 log.go:172] (0xc00261c0b0) (0xc000b67d60) Create stream I0402 00:35:12.021312 7 log.go:172] (0xc00261c0b0) (0xc000b67d60) Stream added, broadcasting: 3 I0402 00:35:12.022412 7 log.go:172] (0xc00261c0b0) Reply frame received for 3 I0402 00:35:12.022438 7 log.go:172] (0xc00261c0b0) (0xc001173180) Create stream I0402 00:35:12.022447 7 log.go:172] (0xc00261c0b0) (0xc001173180) Stream added, broadcasting: 5 I0402 00:35:12.023303 7 log.go:172] (0xc00261c0b0) Reply frame received for 5 I0402 00:35:12.099023 7 log.go:172] (0xc00261c0b0) Data frame received for 5 I0402 00:35:12.099057 7 log.go:172] (0xc001173180) (5) Data frame handling I0402 00:35:12.099083 7 log.go:172] (0xc00261c0b0) Data frame received for 3 I0402 00:35:12.099109 7 log.go:172] (0xc000b67d60) (3) Data frame handling I0402 00:35:12.099128 7 log.go:172] (0xc000b67d60) (3) Data frame sent I0402 00:35:12.099143 7 log.go:172] (0xc00261c0b0) Data frame received for 3 I0402 00:35:12.099156 7 log.go:172] (0xc000b67d60) (3) Data frame handling I0402 00:35:12.100454 7 log.go:172] (0xc00261c0b0) Data frame 
received for 1 I0402 00:35:12.100474 7 log.go:172] (0xc00124bcc0) (1) Data frame handling I0402 00:35:12.100492 7 log.go:172] (0xc00124bcc0) (1) Data frame sent I0402 00:35:12.100506 7 log.go:172] (0xc00261c0b0) (0xc00124bcc0) Stream removed, broadcasting: 1 I0402 00:35:12.100519 7 log.go:172] (0xc00261c0b0) Go away received I0402 00:35:12.100643 7 log.go:172] (0xc00261c0b0) (0xc00124bcc0) Stream removed, broadcasting: 1 I0402 00:35:12.100659 7 log.go:172] (0xc00261c0b0) (0xc000b67d60) Stream removed, broadcasting: 3 I0402 00:35:12.100670 7 log.go:172] (0xc00261c0b0) (0xc001173180) Stream removed, broadcasting: 5 Apr 2 00:35:12.100: INFO: Exec stderr: "" Apr 2 00:35:12.100: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:35:12.100: INFO: >>> kubeConfig: /root/.kube/config I0402 00:35:12.128288 7 log.go:172] (0xc002ccc370) (0xc001e9c280) Create stream I0402 00:35:12.128319 7 log.go:172] (0xc002ccc370) (0xc001e9c280) Stream added, broadcasting: 1 I0402 00:35:12.131346 7 log.go:172] (0xc002ccc370) Reply frame received for 1 I0402 00:35:12.131386 7 log.go:172] (0xc002ccc370) (0xc000c720a0) Create stream I0402 00:35:12.131407 7 log.go:172] (0xc002ccc370) (0xc000c720a0) Stream added, broadcasting: 3 I0402 00:35:12.132409 7 log.go:172] (0xc002ccc370) Reply frame received for 3 I0402 00:35:12.132463 7 log.go:172] (0xc002ccc370) (0xc0011732c0) Create stream I0402 00:35:12.132476 7 log.go:172] (0xc002ccc370) (0xc0011732c0) Stream added, broadcasting: 5 I0402 00:35:12.133760 7 log.go:172] (0xc002ccc370) Reply frame received for 5 I0402 00:35:12.186504 7 log.go:172] (0xc002ccc370) Data frame received for 3 I0402 00:35:12.186533 7 log.go:172] (0xc000c720a0) (3) Data frame handling I0402 00:35:12.186541 7 log.go:172] (0xc000c720a0) (3) Data frame sent I0402 00:35:12.186546 7 log.go:172] (0xc002ccc370) 
Data frame received for 3 I0402 00:35:12.186550 7 log.go:172] (0xc000c720a0) (3) Data frame handling I0402 00:35:12.186585 7 log.go:172] (0xc002ccc370) Data frame received for 5 I0402 00:35:12.186629 7 log.go:172] (0xc0011732c0) (5) Data frame handling I0402 00:35:12.188552 7 log.go:172] (0xc002ccc370) Data frame received for 1 I0402 00:35:12.188572 7 log.go:172] (0xc001e9c280) (1) Data frame handling I0402 00:35:12.188589 7 log.go:172] (0xc001e9c280) (1) Data frame sent I0402 00:35:12.188605 7 log.go:172] (0xc002ccc370) (0xc001e9c280) Stream removed, broadcasting: 1 I0402 00:35:12.188619 7 log.go:172] (0xc002ccc370) Go away received I0402 00:35:12.188735 7 log.go:172] (0xc002ccc370) (0xc001e9c280) Stream removed, broadcasting: 1 I0402 00:35:12.188779 7 log.go:172] (0xc002ccc370) (0xc000c720a0) Stream removed, broadcasting: 3 I0402 00:35:12.188808 7 log.go:172] (0xc002ccc370) (0xc0011732c0) Stream removed, broadcasting: 5 Apr 2 00:35:12.188: INFO: Exec stderr: "" Apr 2 00:35:12.188: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:35:12.188: INFO: >>> kubeConfig: /root/.kube/config I0402 00:35:12.229659 7 log.go:172] (0xc006209290) (0xc001173860) Create stream I0402 00:35:12.229684 7 log.go:172] (0xc006209290) (0xc001173860) Stream added, broadcasting: 1 I0402 00:35:12.231705 7 log.go:172] (0xc006209290) Reply frame received for 1 I0402 00:35:12.231732 7 log.go:172] (0xc006209290) (0xc001de2000) Create stream I0402 00:35:12.231742 7 log.go:172] (0xc006209290) (0xc001de2000) Stream added, broadcasting: 3 I0402 00:35:12.232599 7 log.go:172] (0xc006209290) Reply frame received for 3 I0402 00:35:12.232635 7 log.go:172] (0xc006209290) (0xc000c72140) Create stream I0402 00:35:12.232648 7 log.go:172] (0xc006209290) (0xc000c72140) Stream added, broadcasting: 5 I0402 00:35:12.233719 7 log.go:172] 
(0xc006209290) Reply frame received for 5 I0402 00:35:12.292569 7 log.go:172] (0xc006209290) Data frame received for 5 I0402 00:35:12.292690 7 log.go:172] (0xc000c72140) (5) Data frame handling I0402 00:35:12.292726 7 log.go:172] (0xc006209290) Data frame received for 3 I0402 00:35:12.292776 7 log.go:172] (0xc001de2000) (3) Data frame handling I0402 00:35:12.292812 7 log.go:172] (0xc001de2000) (3) Data frame sent I0402 00:35:12.292866 7 log.go:172] (0xc006209290) Data frame received for 3 I0402 00:35:12.292912 7 log.go:172] (0xc001de2000) (3) Data frame handling I0402 00:35:12.294427 7 log.go:172] (0xc006209290) Data frame received for 1 I0402 00:35:12.294450 7 log.go:172] (0xc001173860) (1) Data frame handling I0402 00:35:12.294463 7 log.go:172] (0xc001173860) (1) Data frame sent I0402 00:35:12.294636 7 log.go:172] (0xc006209290) (0xc001173860) Stream removed, broadcasting: 1 I0402 00:35:12.294712 7 log.go:172] (0xc006209290) Go away received I0402 00:35:12.294764 7 log.go:172] (0xc006209290) (0xc001173860) Stream removed, broadcasting: 1 I0402 00:35:12.294801 7 log.go:172] (0xc006209290) (0xc001de2000) Stream removed, broadcasting: 3 I0402 00:35:12.294818 7 log.go:172] (0xc006209290) (0xc000c72140) Stream removed, broadcasting: 5 Apr 2 00:35:12.294: INFO: Exec stderr: "" Apr 2 00:35:12.294: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 2 00:35:12.294: INFO: >>> kubeConfig: /root/.kube/config I0402 00:35:12.329968 7 log.go:172] (0xc00288a790) (0xc000c723c0) Create stream I0402 00:35:12.329995 7 log.go:172] (0xc00288a790) (0xc000c723c0) Stream added, broadcasting: 1 I0402 00:35:12.332903 7 log.go:172] (0xc00288a790) Reply frame received for 1 I0402 00:35:12.332938 7 log.go:172] (0xc00288a790) (0xc001de2140) Create stream I0402 00:35:12.332951 7 log.go:172] (0xc00288a790) (0xc001de2140) 
Stream added, broadcasting: 3 I0402 00:35:12.334077 7 log.go:172] (0xc00288a790) Reply frame received for 3 I0402 00:35:12.334114 7 log.go:172] (0xc00288a790) (0xc001de2280) Create stream I0402 00:35:12.334127 7 log.go:172] (0xc00288a790) (0xc001de2280) Stream added, broadcasting: 5 I0402 00:35:12.334943 7 log.go:172] (0xc00288a790) Reply frame received for 5 I0402 00:35:12.394931 7 log.go:172] (0xc00288a790) Data frame received for 5 I0402 00:35:12.394993 7 log.go:172] (0xc001de2280) (5) Data frame handling I0402 00:35:12.395037 7 log.go:172] (0xc00288a790) Data frame received for 3 I0402 00:35:12.395059 7 log.go:172] (0xc001de2140) (3) Data frame handling I0402 00:35:12.395075 7 log.go:172] (0xc001de2140) (3) Data frame sent I0402 00:35:12.395090 7 log.go:172] (0xc00288a790) Data frame received for 3 I0402 00:35:12.395109 7 log.go:172] (0xc001de2140) (3) Data frame handling I0402 00:35:12.396679 7 log.go:172] (0xc00288a790) Data frame received for 1 I0402 00:35:12.396763 7 log.go:172] (0xc000c723c0) (1) Data frame handling I0402 00:35:12.396843 7 log.go:172] (0xc000c723c0) (1) Data frame sent I0402 00:35:12.396913 7 log.go:172] (0xc00288a790) (0xc000c723c0) Stream removed, broadcasting: 1 I0402 00:35:12.396956 7 log.go:172] (0xc00288a790) Go away received I0402 00:35:12.397272 7 log.go:172] (0xc00288a790) (0xc000c723c0) Stream removed, broadcasting: 1 I0402 00:35:12.397312 7 log.go:172] (0xc00288a790) (0xc001de2140) Stream removed, broadcasting: 3 I0402 00:35:12.397339 7 log.go:172] (0xc00288a790) (0xc001de2280) Stream removed, broadcasting: 5 Apr 2 00:35:12.397: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:35:12.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1803" for this suite. 
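The KubeletManagedEtcHosts test above execs `cat /etc/hosts` and `cat /etc/hosts-original` in containers with and without an explicit `/etc/hosts` mount, and again in a `hostNetwork: true` pod. A minimal sketch of the kind of pod spec being exercised (hypothetical names; this is not the manifest the e2e framework actually generates):

```yaml
# Illustrative sketch only. A container that mounts its own volume at
# /etc/hosts opts out of kubelet management of that file, and
# hostNetwork: true pods keep the node's /etc/hosts.
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo          # hypothetical name
spec:
  containers:
  - name: managed
    image: busybox
    command: ["sleep", "3600"]
    # no /etc/hosts mount: kubelet injects its managed hosts file
  - name: unmanaged
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts-file
      mountPath: /etc/hosts     # explicit mount disables kubelet management
  volumes:
  - name: hosts-file
    hostPath:
      path: /etc/hosts
```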
• [SLOW TEST:11.113 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3197,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:35:12.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 2 00:35:12.470: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix854213774/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:35:12.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9464" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":191,"skipped":3198,"failed":0} SSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:35:12.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 2 00:35:17.139: INFO: Successfully updated pod "adopt-release-hnh25" STEP: Checking that the Job readopts the Pod Apr 2 00:35:17.139: INFO: Waiting up to 15m0s for pod "adopt-release-hnh25" in namespace "job-8250" to be "adopted" Apr 2 00:35:17.157: INFO: Pod "adopt-release-hnh25": Phase="Running", Reason="", readiness=true. Elapsed: 17.891203ms Apr 2 00:35:19.161: INFO: Pod "adopt-release-hnh25": Phase="Running", Reason="", readiness=true. Elapsed: 2.021884434s Apr 2 00:35:19.161: INFO: Pod "adopt-release-hnh25" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 2 00:35:19.669: INFO: Successfully updated pod "adopt-release-hnh25" STEP: Checking that the Job releases the Pod Apr 2 00:35:19.669: INFO: Waiting up to 15m0s for pod "adopt-release-hnh25" in namespace "job-8250" to be "released" Apr 2 00:35:19.674: INFO: Pod "adopt-release-hnh25": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.840482ms Apr 2 00:35:21.678: INFO: Pod "adopt-release-hnh25": Phase="Running", Reason="", readiness=true. Elapsed: 2.008674171s Apr 2 00:35:21.678: INFO: Pod "adopt-release-hnh25" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:35:21.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8250" for this suite. • [SLOW TEST:9.129 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":192,"skipped":3202,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:35:21.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the 
apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:35:21.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7671" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":193,"skipped":3218,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:35:21.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 2 00:35:21.845: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a68cfa3-03ff-442e-a196-03931956cb75" in namespace "projected-2119" to be "Succeeded or Failed" Apr 2 00:35:21.919: INFO: Pod "downwardapi-volume-4a68cfa3-03ff-442e-a196-03931956cb75": Phase="Pending", Reason="", readiness=false. Elapsed: 73.396575ms Apr 2 00:35:23.923: INFO: Pod "downwardapi-volume-4a68cfa3-03ff-442e-a196-03931956cb75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077179034s Apr 2 00:35:25.927: INFO: Pod "downwardapi-volume-4a68cfa3-03ff-442e-a196-03931956cb75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081750787s STEP: Saw pod success Apr 2 00:35:25.927: INFO: Pod "downwardapi-volume-4a68cfa3-03ff-442e-a196-03931956cb75" satisfied condition "Succeeded or Failed" Apr 2 00:35:25.930: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4a68cfa3-03ff-442e-a196-03931956cb75 container client-container: STEP: delete the pod Apr 2 00:35:25.988: INFO: Waiting for pod downwardapi-volume-4a68cfa3-03ff-442e-a196-03931956cb75 to disappear Apr 2 00:35:25.992: INFO: Pod downwardapi-volume-4a68cfa3-03ff-442e-a196-03931956cb75 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:35:25.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2119" for this suite. 
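The projected downwardAPI test above verifies that a container's CPU request can be surfaced as a file through a projected volume. A hedged sketch of such a spec (hypothetical names and values, not the test's generated manifest):

```yaml
# Illustrative sketch: a projected downwardAPI volume exposing the
# container's CPU request as a file the container can read back.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                 # the value projected into the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
```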
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":194,"skipped":3228,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:35:25.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 2 00:35:26.054: INFO: Waiting up to 5m0s for pod "downward-api-d4f59d2a-42bf-440e-94fc-75857d4f317c" in namespace "downward-api-2277" to be "Succeeded or Failed" Apr 2 00:35:26.076: INFO: Pod "downward-api-d4f59d2a-42bf-440e-94fc-75857d4f317c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.845668ms Apr 2 00:35:28.080: INFO: Pod "downward-api-d4f59d2a-42bf-440e-94fc-75857d4f317c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025979261s Apr 2 00:35:30.084: INFO: Pod "downward-api-d4f59d2a-42bf-440e-94fc-75857d4f317c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029806963s STEP: Saw pod success Apr 2 00:35:30.084: INFO: Pod "downward-api-d4f59d2a-42bf-440e-94fc-75857d4f317c" satisfied condition "Succeeded or Failed" Apr 2 00:35:30.088: INFO: Trying to get logs from node latest-worker2 pod downward-api-d4f59d2a-42bf-440e-94fc-75857d4f317c container dapi-container: STEP: delete the pod Apr 2 00:35:30.106: INFO: Waiting for pod downward-api-d4f59d2a-42bf-440e-94fc-75857d4f317c to disappear Apr 2 00:35:30.109: INFO: Pod downward-api-d4f59d2a-42bf-440e-94fc-75857d4f317c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:35:30.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2277" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:35:30.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-2123bd00-3cb4-4397-a8d8-30490117fbba STEP: Creating a pod to test consume 
configMaps Apr 2 00:35:30.238: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0a9b1ff3-2d69-426f-b25b-ced99aaa1b12" in namespace "projected-462" to be "Succeeded or Failed" Apr 2 00:35:30.241: INFO: Pod "pod-projected-configmaps-0a9b1ff3-2d69-426f-b25b-ced99aaa1b12": Phase="Pending", Reason="", readiness=false. Elapsed: 3.167454ms Apr 2 00:35:32.280: INFO: Pod "pod-projected-configmaps-0a9b1ff3-2d69-426f-b25b-ced99aaa1b12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041466801s Apr 2 00:35:34.284: INFO: Pod "pod-projected-configmaps-0a9b1ff3-2d69-426f-b25b-ced99aaa1b12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045674309s STEP: Saw pod success Apr 2 00:35:34.284: INFO: Pod "pod-projected-configmaps-0a9b1ff3-2d69-426f-b25b-ced99aaa1b12" satisfied condition "Succeeded or Failed" Apr 2 00:35:34.287: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-0a9b1ff3-2d69-426f-b25b-ced99aaa1b12 container projected-configmap-volume-test: STEP: delete the pod Apr 2 00:35:34.309: INFO: Waiting for pod pod-projected-configmaps-0a9b1ff3-2d69-426f-b25b-ced99aaa1b12 to disappear Apr 2 00:35:34.313: INFO: Pod pod-projected-configmaps-0a9b1ff3-2d69-426f-b25b-ced99aaa1b12 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:35:34.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-462" for this suite. 
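The projected configMap test above consumes a ConfigMap through a projected volume with a key-to-path mapping, running as a non-root user. A hedged sketch of that shape (hypothetical ConfigMap name, key, and UID):

```yaml
# Illustrative sketch: a ConfigMap key remapped to a nested path inside
# the mount, consumed by a pod running as a non-root UID.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo  # hypothetical name
spec:
  securityContext:
    runAsUser: 1000               # non-root, as the test requires
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/path/to/data"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: my-config         # hypothetical ConfigMap
          items:
          - key: data-1           # key in the ConfigMap
            path: path/to/data    # file path inside the mount
```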
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3277,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:35:34.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:35:34.369: INFO: Waiting up to 5m0s for pod "busybox-user-65534-842736bb-1300-4149-bd60-631a4185a823" in namespace "security-context-test-4097" to be "Succeeded or Failed" Apr 2 00:35:34.411: INFO: Pod "busybox-user-65534-842736bb-1300-4149-bd60-631a4185a823": Phase="Pending", Reason="", readiness=false. Elapsed: 41.951284ms Apr 2 00:35:36.415: INFO: Pod "busybox-user-65534-842736bb-1300-4149-bd60-631a4185a823": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04586249s Apr 2 00:35:38.420: INFO: Pod "busybox-user-65534-842736bb-1300-4149-bd60-631a4185a823": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050230669s Apr 2 00:35:38.420: INFO: Pod "busybox-user-65534-842736bb-1300-4149-bd60-631a4185a823" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:35:38.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4097" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3278,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:35:38.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:35:38.519: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 2 00:35:38.526: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:38.531: INFO: Number of nodes with available pods: 0 Apr 2 00:35:38.531: INFO: Node latest-worker is running more than one daemon pod Apr 2 00:35:39.616: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:39.633: INFO: Number of nodes with available pods: 0 Apr 2 00:35:39.633: INFO: Node latest-worker is running more than one daemon pod Apr 2 00:35:40.536: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:40.539: INFO: Number of nodes with available pods: 0 Apr 2 00:35:40.539: INFO: Node latest-worker is running more than one daemon pod Apr 2 00:35:41.535: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:41.539: INFO: Number of nodes with available pods: 0 Apr 2 00:35:41.539: INFO: Node latest-worker is running more than one daemon pod Apr 2 00:35:42.536: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:42.540: INFO: Number of nodes with available pods: 2 Apr 2 00:35:42.540: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 2 00:35:42.586: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 2 00:35:42.586: INFO: Wrong image for pod: daemon-set-ds86f. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:42.628: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:43.631: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:43.631: INFO: Wrong image for pod: daemon-set-ds86f. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:43.635: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:44.633: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:44.633: INFO: Wrong image for pod: daemon-set-ds86f. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:44.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:45.633: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:45.633: INFO: Wrong image for pod: daemon-set-ds86f. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 2 00:35:45.633: INFO: Pod daemon-set-ds86f is not available Apr 2 00:35:45.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:46.632: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:46.633: INFO: Wrong image for pod: daemon-set-ds86f. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:46.633: INFO: Pod daemon-set-ds86f is not available Apr 2 00:35:46.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:47.633: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:47.633: INFO: Wrong image for pod: daemon-set-ds86f. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:47.633: INFO: Pod daemon-set-ds86f is not available Apr 2 00:35:47.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:48.632: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:48.632: INFO: Wrong image for pod: daemon-set-ds86f. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 2 00:35:48.632: INFO: Pod daemon-set-ds86f is not available Apr 2 00:35:48.635: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:49.650: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:49.650: INFO: Wrong image for pod: daemon-set-ds86f. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:49.650: INFO: Pod daemon-set-ds86f is not available Apr 2 00:35:49.654: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:50.632: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:50.632: INFO: Wrong image for pod: daemon-set-ds86f. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:50.632: INFO: Pod daemon-set-ds86f is not available Apr 2 00:35:50.636: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:51.633: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:51.633: INFO: Wrong image for pod: daemon-set-ds86f. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 2 00:35:51.633: INFO: Pod daemon-set-ds86f is not available Apr 2 00:35:51.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:52.632: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:52.632: INFO: Wrong image for pod: daemon-set-ds86f. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:52.632: INFO: Pod daemon-set-ds86f is not available Apr 2 00:35:52.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:53.633: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:53.633: INFO: Pod daemon-set-rf5mt is not available Apr 2 00:35:53.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:54.633: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:54.633: INFO: Pod daemon-set-rf5mt is not available Apr 2 00:35:54.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:55.631: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 2 00:35:55.635: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:56.633: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:56.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:57.652: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:57.652: INFO: Pod daemon-set-bdl64 is not available Apr 2 00:35:57.656: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:58.633: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:58.633: INFO: Pod daemon-set-bdl64 is not available Apr 2 00:35:58.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:35:59.632: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:35:59.632: INFO: Pod daemon-set-bdl64 is not available Apr 2 00:35:59.636: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:36:00.633: INFO: Wrong image for pod: daemon-set-bdl64. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:36:00.633: INFO: Pod daemon-set-bdl64 is not available Apr 2 00:36:00.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:36:01.633: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:36:01.633: INFO: Pod daemon-set-bdl64 is not available Apr 2 00:36:01.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:36:02.633: INFO: Wrong image for pod: daemon-set-bdl64. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 2 00:36:02.633: INFO: Pod daemon-set-bdl64 is not available Apr 2 00:36:02.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:36:03.632: INFO: Pod daemon-set-qc9br is not available Apr 2 00:36:03.636: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
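For readers following the rolling update above: the DaemonSet this test drives corresponds roughly to the manifest below. The name, namespace, update strategy, and initial image come from the log; the label selector and container name are not shown in the log and are guesses.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-983
spec:
  selector:
    matchLabels:
      app: daemon-set          # assumed label; the log does not show the selector
  updateStrategy:
    type: RollingUpdate        # the strategy under test
  template:
    metadata:
      labels:
        app: daemon-set        # assumed label
    spec:
      containers:
      - name: app              # assumed container name
        image: docker.io/library/httpd:2.4.38-alpine   # initial image, per the log
```

Patching `spec.template.spec.containers[0].image` to `us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12` is what triggers the one-pod-at-a-time replacement polled in the log.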
Apr 2 00:36:03.640: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:36:03.643: INFO: Number of nodes with available pods: 1 Apr 2 00:36:03.643: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 00:36:04.647: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:36:04.650: INFO: Number of nodes with available pods: 1 Apr 2 00:36:04.650: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 00:36:05.647: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:36:05.650: INFO: Number of nodes with available pods: 1 Apr 2 00:36:05.650: INFO: Node latest-worker2 is running more than one daemon pod Apr 2 00:36:06.648: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:36:06.652: INFO: Number of nodes with available pods: 2 Apr 2 00:36:06.652: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-983, will wait for the garbage collector to delete the pods Apr 2 00:36:06.726: INFO: Deleting DaemonSet.extensions daemon-set took: 8.119593ms Apr 2 00:36:07.026: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.285809ms Apr 2 00:36:13.029: INFO: Number of nodes with available pods: 0 Apr 2 00:36:13.029: INFO: Number of running nodes: 0, number of available pods: 0 Apr 
2 00:36:13.031: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-983/daemonsets","resourceVersion":"4677100"},"items":null} Apr 2 00:36:13.033: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-983/pods","resourceVersion":"4677100"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:36:13.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-983" for this suite. • [SLOW TEST:34.622 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":198,"skipped":3288,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:36:13.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-218a1476-8208-4306-b7ce-8fb0f3ee2782 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:36:17.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7869" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:36:17.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 2 00:36:17.760: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 2 
00:36:19.771: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384577, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384577, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384577, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384577, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 00:36:22.797: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:36:22.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:36:23.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-325" for this suite. 
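The conversion wiring being verified here would look roughly like the CRD fragment below. The webhook service name and namespace come from the log; the group, kind, schemas, and path are hypothetical, since the log does not show the CRD itself.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.example.com        # hypothetical CRD name
spec:
  group: example.com                     # hypothetical group
  scope: Namespaced
  names:
    plural: e2e-test-crds
    singular: e2e-test-crd
    kind: E2eTestCrd
  versions:
  - name: v1
    served: true
    storage: true                        # v1 is stored; serving v2 goes through conversion
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          name: e2e-test-crd-conversion-webhook   # service name from the log
          namespace: crd-webhook-325              # test namespace from the log
          path: /crdconvert                       # hypothetical path
```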
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.946 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":200,"skipped":3331,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:36:24.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Apr 2 00:36:24.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-245' Apr 2 00:36:24.789: INFO: stderr: "" Apr 2 00:36:24.789: INFO: stdout: "pod/pause created\n" Apr 2 00:36:24.789: INFO: Waiting up to 
5m0s for 1 pods to be running and ready: [pause] Apr 2 00:36:24.789: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-245" to be "running and ready" Apr 2 00:36:24.892: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 102.626327ms Apr 2 00:36:26.902: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113270234s Apr 2 00:36:28.906: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.117201924s Apr 2 00:36:28.906: INFO: Pod "pause" satisfied condition "running and ready" Apr 2 00:36:28.906: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Apr 2 00:36:28.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-245' Apr 2 00:36:28.999: INFO: stderr: "" Apr 2 00:36:28.999: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 2 00:36:28.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-245' Apr 2 00:36:29.099: INFO: stderr: "" Apr 2 00:36:29.099: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 2 00:36:29.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-245' Apr 2 00:36:29.213: INFO: stderr: "" Apr 2 00:36:29.213: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label 
Apr 2 00:36:29.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-245' Apr 2 00:36:29.299: INFO: stderr: "" Apr 2 00:36:29.299: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Apr 2 00:36:29.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-245' Apr 2 00:36:29.440: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 2 00:36:29.440: INFO: stdout: "pod \"pause\" force deleted\n" Apr 2 00:36:29.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-245' Apr 2 00:36:29.534: INFO: stderr: "No resources found in kubectl-245 namespace.\n" Apr 2 00:36:29.534: INFO: stdout: "" Apr 2 00:36:29.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-245 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 2 00:36:29.760: INFO: stderr: "" Apr 2 00:36:29.760: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:36:29.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-245" for this suite. 
• [SLOW TEST:5.681 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":201,"skipped":3364,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:36:29.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-2633229b-5738-44ec-b1ef-9077d23e7ec5
STEP: Creating a pod to test consume configMaps
Apr 2 00:36:29.952: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fd9077f7-ca50-4557-9663-fa48f3f6082c" in namespace "projected-1916" to be "Succeeded or Failed"
Apr 2 00:36:29.956: INFO: Pod "pod-projected-configmaps-fd9077f7-ca50-4557-9663-fa48f3f6082c": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.214058ms Apr 2 00:36:31.959: INFO: Pod "pod-projected-configmaps-fd9077f7-ca50-4557-9663-fa48f3f6082c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007733093s Apr 2 00:36:33.962: INFO: Pod "pod-projected-configmaps-fd9077f7-ca50-4557-9663-fa48f3f6082c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010803755s STEP: Saw pod success Apr 2 00:36:33.962: INFO: Pod "pod-projected-configmaps-fd9077f7-ca50-4557-9663-fa48f3f6082c" satisfied condition "Succeeded or Failed" Apr 2 00:36:33.999: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-fd9077f7-ca50-4557-9663-fa48f3f6082c container projected-configmap-volume-test: STEP: delete the pod Apr 2 00:36:34.026: INFO: Waiting for pod pod-projected-configmaps-fd9077f7-ca50-4557-9663-fa48f3f6082c to disappear Apr 2 00:36:34.030: INFO: Pod pod-projected-configmaps-fd9077f7-ca50-4557-9663-fa48f3f6082c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:36:34.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1916" for this suite. 
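The pod this test builds mounts the ConfigMap through a projected volume and runs as a non-root user; a sketch is below. The ConfigMap name, container name, and namespace come from the log; the UID, image, command, and mount path are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example        # real pod names are generated per run
  namespace: projected-1916
spec:
  securityContext:
    runAsUser: 1000                             # assumed non-root UID
  containers:
  - name: projected-configmap-volume-test       # container name from the log
    image: docker.io/library/busybox:1.29       # assumed image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]   # hypothetical key
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume   # assumed mount path
  restartPolicy: Never
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-2633229b-5738-44ec-b1ef-9077d23e7ec5
```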
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3366,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:36:34.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 2 00:36:42.156: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 00:36:42.158: INFO: Pod pod-with-prestop-http-hook still exists Apr 2 00:36:44.158: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 00:36:44.163: INFO: Pod pod-with-prestop-http-hook still exists Apr 2 00:36:46.158: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 00:36:46.162: INFO: Pod pod-with-prestop-http-hook still exists Apr 2 00:36:48.158: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 00:36:48.167: INFO: Pod pod-with-prestop-http-hook still exists Apr 2 00:36:50.158: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 00:36:50.162: INFO: Pod pod-with-prestop-http-hook still exists Apr 2 00:36:52.158: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 00:36:52.163: INFO: Pod pod-with-prestop-http-hook still exists Apr 2 00:36:54.158: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 2 00:36:54.163: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:36:54.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3224" for this suite. 
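The pod deleted above declares an HTTP preStop hook: when the kubelet stops the container it first issues the GET, which is what the test's handler pod later confirms. The pod name and namespace come from the log; the image, host, path, and port are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
  namespace: container-lifecycle-hook-3224
spec:
  containers:
  - name: pod-with-prestop-http-hook          # assumed container name
    image: registry.k8s.io/pause:3.2          # assumed image
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.0.1                    # hypothetical address of the handler pod
          path: /echo?msg=prestop             # hypothetical path
          port: 8080                          # hypothetical port
```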
• [SLOW TEST:20.140 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3460,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:36:54.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7957 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-7957 STEP: Waiting until all stateful set ss 
replicas will be running in namespace statefulset-7957 Apr 2 00:36:54.316: INFO: Found 0 stateful pods, waiting for 1 Apr 2 00:37:04.321: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 2 00:37:04.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7957 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 00:37:04.568: INFO: stderr: "I0402 00:37:04.449022 2606 log.go:172] (0xc0006f8a50) (0xc00072a0a0) Create stream\nI0402 00:37:04.449240 2606 log.go:172] (0xc0006f8a50) (0xc00072a0a0) Stream added, broadcasting: 1\nI0402 00:37:04.452082 2606 log.go:172] (0xc0006f8a50) Reply frame received for 1\nI0402 00:37:04.452154 2606 log.go:172] (0xc0006f8a50) (0xc00072a1e0) Create stream\nI0402 00:37:04.452176 2606 log.go:172] (0xc0006f8a50) (0xc00072a1e0) Stream added, broadcasting: 3\nI0402 00:37:04.453621 2606 log.go:172] (0xc0006f8a50) Reply frame received for 3\nI0402 00:37:04.453667 2606 log.go:172] (0xc0006f8a50) (0xc00072a280) Create stream\nI0402 00:37:04.453681 2606 log.go:172] (0xc0006f8a50) (0xc00072a280) Stream added, broadcasting: 5\nI0402 00:37:04.454683 2606 log.go:172] (0xc0006f8a50) Reply frame received for 5\nI0402 00:37:04.526208 2606 log.go:172] (0xc0006f8a50) Data frame received for 5\nI0402 00:37:04.526237 2606 log.go:172] (0xc00072a280) (5) Data frame handling\nI0402 00:37:04.526255 2606 log.go:172] (0xc00072a280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 00:37:04.561981 2606 log.go:172] (0xc0006f8a50) Data frame received for 5\nI0402 00:37:04.562015 2606 log.go:172] (0xc00072a280) (5) Data frame handling\nI0402 00:37:04.562054 2606 log.go:172] (0xc0006f8a50) Data frame received for 3\nI0402 00:37:04.562111 2606 log.go:172] (0xc00072a1e0) (3) Data frame handling\nI0402 
00:37:04.562156 2606 log.go:172] (0xc00072a1e0) (3) Data frame sent\nI0402 00:37:04.562179 2606 log.go:172] (0xc0006f8a50) Data frame received for 3\nI0402 00:37:04.562197 2606 log.go:172] (0xc00072a1e0) (3) Data frame handling\nI0402 00:37:04.564184 2606 log.go:172] (0xc0006f8a50) Data frame received for 1\nI0402 00:37:04.564217 2606 log.go:172] (0xc00072a0a0) (1) Data frame handling\nI0402 00:37:04.564238 2606 log.go:172] (0xc00072a0a0) (1) Data frame sent\nI0402 00:37:04.564267 2606 log.go:172] (0xc0006f8a50) (0xc00072a0a0) Stream removed, broadcasting: 1\nI0402 00:37:04.564327 2606 log.go:172] (0xc0006f8a50) Go away received\nI0402 00:37:04.564692 2606 log.go:172] (0xc0006f8a50) (0xc00072a0a0) Stream removed, broadcasting: 1\nI0402 00:37:04.564723 2606 log.go:172] (0xc0006f8a50) (0xc00072a1e0) Stream removed, broadcasting: 3\nI0402 00:37:04.564744 2606 log.go:172] (0xc0006f8a50) (0xc00072a280) Stream removed, broadcasting: 5\n" Apr 2 00:37:04.568: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 00:37:04.568: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 00:37:04.571: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 2 00:37:14.576: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 2 00:37:14.576: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 00:37:14.594: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 00:37:14.594: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC }] Apr 2 00:37:14.594: INFO: Apr 2 00:37:14.594: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 2 00:37:15.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990212002s Apr 2 00:37:16.637: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98580321s Apr 2 00:37:17.641: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.947355255s Apr 2 00:37:18.646: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.943329799s Apr 2 00:37:19.651: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.938336079s Apr 2 00:37:20.656: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.933413852s Apr 2 00:37:21.661: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.92855135s Apr 2 00:37:22.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.923414235s Apr 2 00:37:23.671: INFO: Verifying statefulset ss doesn't scale past 3 for another 918.573865ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7957 Apr 2 00:37:24.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7957 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 00:37:24.890: INFO: stderr: "I0402 00:37:24.821830 2628 log.go:172] (0xc00003a840) (0xc0008f4000) Create stream\nI0402 00:37:24.821902 2628 log.go:172] (0xc00003a840) (0xc0008f4000) Stream added, broadcasting: 1\nI0402 00:37:24.824470 2628 log.go:172] (0xc00003a840) Reply frame received for 1\nI0402 00:37:24.824510 2628 log.go:172] (0xc00003a840) (0xc000bb8000) Create stream\nI0402 00:37:24.824519 2628 log.go:172] (0xc00003a840) (0xc000bb8000) Stream added, broadcasting: 3\nI0402 00:37:24.825657 2628 log.go:172] (0xc00003a840) Reply frame received for 3\nI0402 00:37:24.825687 2628 
log.go:172] (0xc00003a840) (0xc000bb80a0) Create stream\nI0402 00:37:24.825695 2628 log.go:172] (0xc00003a840) (0xc000bb80a0) Stream added, broadcasting: 5\nI0402 00:37:24.826396 2628 log.go:172] (0xc00003a840) Reply frame received for 5\nI0402 00:37:24.884936 2628 log.go:172] (0xc00003a840) Data frame received for 3\nI0402 00:37:24.884978 2628 log.go:172] (0xc000bb8000) (3) Data frame handling\nI0402 00:37:24.884993 2628 log.go:172] (0xc000bb8000) (3) Data frame sent\nI0402 00:37:24.885002 2628 log.go:172] (0xc00003a840) Data frame received for 3\nI0402 00:37:24.885021 2628 log.go:172] (0xc000bb8000) (3) Data frame handling\nI0402 00:37:24.885060 2628 log.go:172] (0xc00003a840) Data frame received for 5\nI0402 00:37:24.885087 2628 log.go:172] (0xc000bb80a0) (5) Data frame handling\nI0402 00:37:24.885104 2628 log.go:172] (0xc000bb80a0) (5) Data frame sent\nI0402 00:37:24.885204 2628 log.go:172] (0xc00003a840) Data frame received for 5\nI0402 00:37:24.885216 2628 log.go:172] (0xc000bb80a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 00:37:24.886457 2628 log.go:172] (0xc00003a840) Data frame received for 1\nI0402 00:37:24.886471 2628 log.go:172] (0xc0008f4000) (1) Data frame handling\nI0402 00:37:24.886479 2628 log.go:172] (0xc0008f4000) (1) Data frame sent\nI0402 00:37:24.886490 2628 log.go:172] (0xc00003a840) (0xc0008f4000) Stream removed, broadcasting: 1\nI0402 00:37:24.886726 2628 log.go:172] (0xc00003a840) Go away received\nI0402 00:37:24.886752 2628 log.go:172] (0xc00003a840) (0xc0008f4000) Stream removed, broadcasting: 1\nI0402 00:37:24.886766 2628 log.go:172] (0xc00003a840) (0xc000bb8000) Stream removed, broadcasting: 3\nI0402 00:37:24.886784 2628 log.go:172] (0xc00003a840) (0xc000bb80a0) Stream removed, broadcasting: 5\n" Apr 2 00:37:24.890: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 00:37:24.890: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 00:37:24.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 00:37:25.114: INFO: stderr: "I0402 00:37:25.042917 2650 log.go:172] (0xc00003a840) (0xc0004552c0) Create stream\nI0402 00:37:25.042974 2650 log.go:172] (0xc00003a840) (0xc0004552c0) Stream added, broadcasting: 1\nI0402 00:37:25.045375 2650 log.go:172] (0xc00003a840) Reply frame received for 1\nI0402 00:37:25.045422 2650 log.go:172] (0xc00003a840) (0xc000863ae0) Create stream\nI0402 00:37:25.045432 2650 log.go:172] (0xc00003a840) (0xc000863ae0) Stream added, broadcasting: 3\nI0402 00:37:25.046199 2650 log.go:172] (0xc00003a840) Reply frame received for 3\nI0402 00:37:25.046232 2650 log.go:172] (0xc00003a840) (0xc000816000) Create stream\nI0402 00:37:25.046244 2650 log.go:172] (0xc00003a840) (0xc000816000) Stream added, broadcasting: 5\nI0402 00:37:25.047137 2650 log.go:172] (0xc00003a840) Reply frame received for 5\nI0402 00:37:25.107894 2650 log.go:172] (0xc00003a840) Data frame received for 3\nI0402 00:37:25.107936 2650 log.go:172] (0xc000863ae0) (3) Data frame handling\nI0402 00:37:25.107963 2650 log.go:172] (0xc00003a840) Data frame received for 5\nI0402 00:37:25.107997 2650 log.go:172] (0xc000816000) (5) Data frame handling\nI0402 00:37:25.108029 2650 log.go:172] (0xc000816000) (5) Data frame sent\nI0402 00:37:25.108051 2650 log.go:172] (0xc00003a840) Data frame received for 5\nI0402 00:37:25.108064 2650 log.go:172] (0xc000816000) (5) Data frame handling\nI0402 00:37:25.108084 2650 log.go:172] (0xc000863ae0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0402 00:37:25.108113 2650 log.go:172] (0xc00003a840) Data frame received for 3\nI0402 00:37:25.108123 2650 
log.go:172] (0xc000863ae0) (3) Data frame handling\nI0402 00:37:25.109723 2650 log.go:172] (0xc00003a840) Data frame received for 1\nI0402 00:37:25.109740 2650 log.go:172] (0xc0004552c0) (1) Data frame handling\nI0402 00:37:25.109748 2650 log.go:172] (0xc0004552c0) (1) Data frame sent\nI0402 00:37:25.109758 2650 log.go:172] (0xc00003a840) (0xc0004552c0) Stream removed, broadcasting: 1\nI0402 00:37:25.109780 2650 log.go:172] (0xc00003a840) Go away received\nI0402 00:37:25.110152 2650 log.go:172] (0xc00003a840) (0xc0004552c0) Stream removed, broadcasting: 1\nI0402 00:37:25.110175 2650 log.go:172] (0xc00003a840) (0xc000863ae0) Stream removed, broadcasting: 3\nI0402 00:37:25.110187 2650 log.go:172] (0xc00003a840) (0xc000816000) Stream removed, broadcasting: 5\n" Apr 2 00:37:25.114: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 00:37:25.114: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 00:37:25.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7957 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 00:37:25.322: INFO: stderr: "I0402 00:37:25.242955 2670 log.go:172] (0xc000bea000) (0xc0008e2000) Create stream\nI0402 00:37:25.243004 2670 log.go:172] (0xc000bea000) (0xc0008e2000) Stream added, broadcasting: 1\nI0402 00:37:25.254779 2670 log.go:172] (0xc000bea000) Reply frame received for 1\nI0402 00:37:25.254839 2670 log.go:172] (0xc000bea000) (0xc0008e20a0) Create stream\nI0402 00:37:25.254854 2670 log.go:172] (0xc000bea000) (0xc0008e20a0) Stream added, broadcasting: 3\nI0402 00:37:25.256373 2670 log.go:172] (0xc000bea000) Reply frame received for 3\nI0402 00:37:25.256406 2670 log.go:172] (0xc000bea000) (0xc0008e2140) Create stream\nI0402 00:37:25.256428 2670 log.go:172] (0xc000bea000) (0xc0008e2140) Stream 
added, broadcasting: 5\nI0402 00:37:25.260274 2670 log.go:172] (0xc000bea000) Reply frame received for 5\nI0402 00:37:25.316733 2670 log.go:172] (0xc000bea000) Data frame received for 5\nI0402 00:37:25.316766 2670 log.go:172] (0xc000bea000) Data frame received for 3\nI0402 00:37:25.316794 2670 log.go:172] (0xc0008e20a0) (3) Data frame handling\nI0402 00:37:25.316820 2670 log.go:172] (0xc0008e20a0) (3) Data frame sent\nI0402 00:37:25.316830 2670 log.go:172] (0xc000bea000) Data frame received for 3\nI0402 00:37:25.316848 2670 log.go:172] (0xc0008e20a0) (3) Data frame handling\nI0402 00:37:25.316899 2670 log.go:172] (0xc0008e2140) (5) Data frame handling\nI0402 00:37:25.316928 2670 log.go:172] (0xc0008e2140) (5) Data frame sent\nI0402 00:37:25.316944 2670 log.go:172] (0xc000bea000) Data frame received for 5\nI0402 00:37:25.317026 2670 log.go:172] (0xc0008e2140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0402 00:37:25.318329 2670 log.go:172] (0xc000bea000) Data frame received for 1\nI0402 00:37:25.318360 2670 log.go:172] (0xc0008e2000) (1) Data frame handling\nI0402 00:37:25.318368 2670 log.go:172] (0xc0008e2000) (1) Data frame sent\nI0402 00:37:25.318382 2670 log.go:172] (0xc000bea000) (0xc0008e2000) Stream removed, broadcasting: 1\nI0402 00:37:25.318406 2670 log.go:172] (0xc000bea000) Go away received\nI0402 00:37:25.318771 2670 log.go:172] (0xc000bea000) (0xc0008e2000) Stream removed, broadcasting: 1\nI0402 00:37:25.318791 2670 log.go:172] (0xc000bea000) (0xc0008e20a0) Stream removed, broadcasting: 3\nI0402 00:37:25.318798 2670 log.go:172] (0xc000bea000) (0xc0008e2140) Stream removed, broadcasting: 5\n" Apr 2 00:37:25.323: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 00:37:25.323: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 
2 00:37:25.332: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 2 00:37:25.332: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 2 00:37:25.332: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 2 00:37:25.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7957 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 00:37:25.546: INFO: stderr: "I0402 00:37:25.471978 2693 log.go:172] (0xc00055e000) (0xc00094a000) Create stream\nI0402 00:37:25.472051 2693 log.go:172] (0xc00055e000) (0xc00094a000) Stream added, broadcasting: 1\nI0402 00:37:25.474672 2693 log.go:172] (0xc00055e000) Reply frame received for 1\nI0402 00:37:25.474716 2693 log.go:172] (0xc00055e000) (0xc00096e000) Create stream\nI0402 00:37:25.474729 2693 log.go:172] (0xc00055e000) (0xc00096e000) Stream added, broadcasting: 3\nI0402 00:37:25.475659 2693 log.go:172] (0xc00055e000) Reply frame received for 3\nI0402 00:37:25.475701 2693 log.go:172] (0xc00055e000) (0xc0008135e0) Create stream\nI0402 00:37:25.475711 2693 log.go:172] (0xc00055e000) (0xc0008135e0) Stream added, broadcasting: 5\nI0402 00:37:25.476861 2693 log.go:172] (0xc00055e000) Reply frame received for 5\nI0402 00:37:25.541272 2693 log.go:172] (0xc00055e000) Data frame received for 3\nI0402 00:37:25.541307 2693 log.go:172] (0xc00096e000) (3) Data frame handling\nI0402 00:37:25.541316 2693 log.go:172] (0xc00096e000) (3) Data frame sent\nI0402 00:37:25.541340 2693 log.go:172] (0xc00055e000) Data frame received for 3\nI0402 00:37:25.541346 2693 log.go:172] (0xc00096e000) (3) Data frame handling\nI0402 00:37:25.541362 2693 log.go:172] (0xc00055e000) Data frame received for 5\nI0402 00:37:25.541367 2693 log.go:172] (0xc0008135e0) (5) Data frame 
handling\nI0402 00:37:25.541376 2693 log.go:172] (0xc0008135e0) (5) Data frame sent\nI0402 00:37:25.541383 2693 log.go:172] (0xc00055e000) Data frame received for 5\nI0402 00:37:25.541391 2693 log.go:172] (0xc0008135e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 00:37:25.542911 2693 log.go:172] (0xc00055e000) Data frame received for 1\nI0402 00:37:25.542923 2693 log.go:172] (0xc00094a000) (1) Data frame handling\nI0402 00:37:25.542929 2693 log.go:172] (0xc00094a000) (1) Data frame sent\nI0402 00:37:25.542937 2693 log.go:172] (0xc00055e000) (0xc00094a000) Stream removed, broadcasting: 1\nI0402 00:37:25.543003 2693 log.go:172] (0xc00055e000) Go away received\nI0402 00:37:25.543220 2693 log.go:172] (0xc00055e000) (0xc00094a000) Stream removed, broadcasting: 1\nI0402 00:37:25.543235 2693 log.go:172] (0xc00055e000) (0xc00096e000) Stream removed, broadcasting: 3\nI0402 00:37:25.543240 2693 log.go:172] (0xc00055e000) (0xc0008135e0) Stream removed, broadcasting: 5\n" Apr 2 00:37:25.546: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 00:37:25.546: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 00:37:25.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7957 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 00:37:25.776: INFO: stderr: "I0402 00:37:25.678303 2716 log.go:172] (0xc000b12580) (0xc00072a0a0) Create stream\nI0402 00:37:25.678384 2716 log.go:172] (0xc000b12580) (0xc00072a0a0) Stream added, broadcasting: 1\nI0402 00:37:25.681009 2716 log.go:172] (0xc000b12580) Reply frame received for 1\nI0402 00:37:25.681053 2716 log.go:172] (0xc000b12580) (0xc000693180) Create stream\nI0402 00:37:25.681068 2716 log.go:172] (0xc000b12580) (0xc000693180) Stream added, broadcasting: 
3\nI0402 00:37:25.682253 2716 log.go:172] (0xc000b12580) Reply frame received for 3\nI0402 00:37:25.682287 2716 log.go:172] (0xc000b12580) (0xc00072a1e0) Create stream\nI0402 00:37:25.682294 2716 log.go:172] (0xc000b12580) (0xc00072a1e0) Stream added, broadcasting: 5\nI0402 00:37:25.683661 2716 log.go:172] (0xc000b12580) Reply frame received for 5\nI0402 00:37:25.741456 2716 log.go:172] (0xc000b12580) Data frame received for 5\nI0402 00:37:25.741503 2716 log.go:172] (0xc00072a1e0) (5) Data frame handling\nI0402 00:37:25.741547 2716 log.go:172] (0xc00072a1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 00:37:25.768776 2716 log.go:172] (0xc000b12580) Data frame received for 3\nI0402 00:37:25.768813 2716 log.go:172] (0xc000693180) (3) Data frame handling\nI0402 00:37:25.768824 2716 log.go:172] (0xc000693180) (3) Data frame sent\nI0402 00:37:25.768853 2716 log.go:172] (0xc000b12580) Data frame received for 5\nI0402 00:37:25.768885 2716 log.go:172] (0xc00072a1e0) (5) Data frame handling\nI0402 00:37:25.769008 2716 log.go:172] (0xc000b12580) Data frame received for 3\nI0402 00:37:25.769043 2716 log.go:172] (0xc000693180) (3) Data frame handling\nI0402 00:37:25.771175 2716 log.go:172] (0xc000b12580) Data frame received for 1\nI0402 00:37:25.771215 2716 log.go:172] (0xc00072a0a0) (1) Data frame handling\nI0402 00:37:25.771238 2716 log.go:172] (0xc00072a0a0) (1) Data frame sent\nI0402 00:37:25.771261 2716 log.go:172] (0xc000b12580) (0xc00072a0a0) Stream removed, broadcasting: 1\nI0402 00:37:25.771296 2716 log.go:172] (0xc000b12580) Go away received\nI0402 00:37:25.771785 2716 log.go:172] (0xc000b12580) (0xc00072a0a0) Stream removed, broadcasting: 1\nI0402 00:37:25.771810 2716 log.go:172] (0xc000b12580) (0xc000693180) Stream removed, broadcasting: 3\nI0402 00:37:25.771823 2716 log.go:172] (0xc000b12580) (0xc00072a1e0) Stream removed, broadcasting: 5\n" Apr 2 00:37:25.776: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n" Apr 2 00:37:25.776: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 00:37:25.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7957 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 00:37:26.010: INFO: stderr: "I0402 00:37:25.906853 2738 log.go:172] (0xc000a946e0) (0xc0009f2000) Create stream\nI0402 00:37:25.906916 2738 log.go:172] (0xc000a946e0) (0xc0009f2000) Stream added, broadcasting: 1\nI0402 00:37:25.909859 2738 log.go:172] (0xc000a946e0) Reply frame received for 1\nI0402 00:37:25.909889 2738 log.go:172] (0xc000a946e0) (0xc000a360a0) Create stream\nI0402 00:37:25.909901 2738 log.go:172] (0xc000a946e0) (0xc000a360a0) Stream added, broadcasting: 3\nI0402 00:37:25.910862 2738 log.go:172] (0xc000a946e0) Reply frame received for 3\nI0402 00:37:25.910901 2738 log.go:172] (0xc000a946e0) (0xc000aa83c0) Create stream\nI0402 00:37:25.910915 2738 log.go:172] (0xc000a946e0) (0xc000aa83c0) Stream added, broadcasting: 5\nI0402 00:37:25.911991 2738 log.go:172] (0xc000a946e0) Reply frame received for 5\nI0402 00:37:25.976301 2738 log.go:172] (0xc000a946e0) Data frame received for 5\nI0402 00:37:25.976330 2738 log.go:172] (0xc000aa83c0) (5) Data frame handling\nI0402 00:37:25.976352 2738 log.go:172] (0xc000aa83c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 00:37:26.003146 2738 log.go:172] (0xc000a946e0) Data frame received for 5\nI0402 00:37:26.003184 2738 log.go:172] (0xc000aa83c0) (5) Data frame handling\nI0402 00:37:26.003245 2738 log.go:172] (0xc000a946e0) Data frame received for 3\nI0402 00:37:26.003295 2738 log.go:172] (0xc000a360a0) (3) Data frame handling\nI0402 00:37:26.003345 2738 log.go:172] (0xc000a360a0) (3) Data frame sent\nI0402 00:37:26.003378 2738 log.go:172] (0xc000a946e0) Data 
frame received for 3\nI0402 00:37:26.003391 2738 log.go:172] (0xc000a360a0) (3) Data frame handling\nI0402 00:37:26.005511 2738 log.go:172] (0xc000a946e0) Data frame received for 1\nI0402 00:37:26.005551 2738 log.go:172] (0xc0009f2000) (1) Data frame handling\nI0402 00:37:26.005587 2738 log.go:172] (0xc0009f2000) (1) Data frame sent\nI0402 00:37:26.005682 2738 log.go:172] (0xc000a946e0) (0xc0009f2000) Stream removed, broadcasting: 1\nI0402 00:37:26.005780 2738 log.go:172] (0xc000a946e0) Go away received\nI0402 00:37:26.006139 2738 log.go:172] (0xc000a946e0) (0xc0009f2000) Stream removed, broadcasting: 1\nI0402 00:37:26.006165 2738 log.go:172] (0xc000a946e0) (0xc000a360a0) Stream removed, broadcasting: 3\nI0402 00:37:26.006186 2738 log.go:172] (0xc000a946e0) (0xc000aa83c0) Stream removed, broadcasting: 5\n" Apr 2 00:37:26.010: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 00:37:26.010: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 00:37:26.010: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 00:37:26.014: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 2 00:37:36.022: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 2 00:37:36.022: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 2 00:37:36.022: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 2 00:37:36.049: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 00:37:36.049: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC }] Apr 2 00:37:36.049: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC }] Apr 2 00:37:36.049: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC }] Apr 2 00:37:36.049: INFO: Apr 2 00:37:36.049: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 2 00:37:37.061: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 00:37:37.061: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC }] Apr 2 00:37:37.061: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 
00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC }] Apr 2 00:37:37.061: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC }] Apr 2 00:37:37.061: INFO: Apr 2 00:37:37.061: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 2 00:37:38.066: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 00:37:38.066: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC }] Apr 2 00:37:38.066: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC }] Apr 2 00:37:38.066: 
INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC }] Apr 2 00:37:38.066: INFO: Apr 2 00:37:38.066: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 2 00:37:39.070: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 00:37:39.071: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC }] Apr 2 00:37:39.071: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC }] Apr 2 00:37:39.071: INFO: Apr 2 00:37:39.071: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 2 00:37:40.075: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 00:37:40.075: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 
00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC }] Apr 2 00:37:40.075: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC }] Apr 2 00:37:40.076: INFO: Apr 2 00:37:40.076: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 2 00:37:41.080: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 00:37:41.080: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC }] Apr 2 00:37:41.080: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC }] Apr 2 00:37:41.080: 
INFO: Apr 2 00:37:41.080: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 2 00:37:42.085: INFO: POD NODE PHASE GRACE CONDITIONS Apr 2 00:37:42.085: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:36:54 +0000 UTC }] Apr 2 00:37:42.085: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-02 00:37:14 +0000 UTC }] Apr 2 00:37:42.085: INFO: Apr 2 00:37:42.085: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 2 00:37:43.090: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.944457722s Apr 2 00:37:44.094: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.939593947s Apr 2 00:37:45.098: INFO: Verifying statefulset ss doesn't scale past 0 for another 935.872255ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-7957 Apr 2 00:37:46.102: INFO: Scaling statefulset ss to 0 Apr 2 00:37:46.111: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 2 00:37:46.114: INFO: Deleting all
statefulset in ns statefulset-7957 Apr 2 00:37:46.117: INFO: Scaling statefulset ss to 0 Apr 2 00:37:46.125: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 00:37:46.128: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:37:46.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7957" for this suite. • [SLOW TEST:51.973 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":204,"skipped":3460,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:37:46.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 2 00:37:46.209: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4721 /api/v1/namespaces/watch-4721/configmaps/e2e-watch-test-watch-closed 3ccdeba0-7d6c-4dfe-8d8c-408ca2686512 4677738 0 2020-04-02 00:37:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:37:46.209: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4721 /api/v1/namespaces/watch-4721/configmaps/e2e-watch-test-watch-closed 3ccdeba0-7d6c-4dfe-8d8c-408ca2686512 4677739 0 2020-04-02 00:37:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 2 00:37:46.237: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4721 /api/v1/namespaces/watch-4721/configmaps/e2e-watch-test-watch-closed 3ccdeba0-7d6c-4dfe-8d8c-408ca2686512 4677740 0 2020-04-02 00:37:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 2 00:37:46.237: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4721 /api/v1/namespaces/watch-4721/configmaps/e2e-watch-test-watch-closed 3ccdeba0-7d6c-4dfe-8d8c-408ca2686512 4677741 0 2020-04-02 
00:37:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:37:46.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4721" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":205,"skipped":3470,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:37:46.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 2 00:37:46.298: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e861bc1-0c93-44b0-b08e-5c4783ade9b2" in namespace "downward-api-4867" to be "Succeeded or Failed" Apr 2 00:37:46.315: INFO: Pod "downwardapi-volume-6e861bc1-0c93-44b0-b08e-5c4783ade9b2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.013105ms Apr 2 00:37:48.319: INFO: Pod "downwardapi-volume-6e861bc1-0c93-44b0-b08e-5c4783ade9b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021069426s Apr 2 00:37:50.323: INFO: Pod "downwardapi-volume-6e861bc1-0c93-44b0-b08e-5c4783ade9b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025198569s STEP: Saw pod success Apr 2 00:37:50.323: INFO: Pod "downwardapi-volume-6e861bc1-0c93-44b0-b08e-5c4783ade9b2" satisfied condition "Succeeded or Failed" Apr 2 00:37:50.326: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6e861bc1-0c93-44b0-b08e-5c4783ade9b2 container client-container: STEP: delete the pod Apr 2 00:37:50.391: INFO: Waiting for pod downwardapi-volume-6e861bc1-0c93-44b0-b08e-5c4783ade9b2 to disappear Apr 2 00:37:50.400: INFO: Pod downwardapi-volume-6e861bc1-0c93-44b0-b08e-5c4783ade9b2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:37:50.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4867" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3471,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:37:50.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-2cf6d703-1d6d-4bf4-a377-368d26f9b72b STEP: Creating a pod to test consume secrets Apr 2 00:37:50.475: INFO: Waiting up to 5m0s for pod "pod-secrets-e7d2fa12-3288-4810-ade4-214ea674e41f" in namespace "secrets-6772" to be "Succeeded or Failed" Apr 2 00:37:50.478: INFO: Pod "pod-secrets-e7d2fa12-3288-4810-ade4-214ea674e41f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.602326ms Apr 2 00:37:52.498: INFO: Pod "pod-secrets-e7d2fa12-3288-4810-ade4-214ea674e41f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022080856s Apr 2 00:37:54.502: INFO: Pod "pod-secrets-e7d2fa12-3288-4810-ade4-214ea674e41f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026567286s STEP: Saw pod success Apr 2 00:37:54.502: INFO: Pod "pod-secrets-e7d2fa12-3288-4810-ade4-214ea674e41f" satisfied condition "Succeeded or Failed" Apr 2 00:37:54.505: INFO: Trying to get logs from node latest-worker pod pod-secrets-e7d2fa12-3288-4810-ade4-214ea674e41f container secret-volume-test: STEP: delete the pod Apr 2 00:37:54.533: INFO: Waiting for pod pod-secrets-e7d2fa12-3288-4810-ade4-214ea674e41f to disappear Apr 2 00:37:54.544: INFO: Pod pod-secrets-e7d2fa12-3288-4810-ade4-214ea674e41f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:37:54.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6772" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3494,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:37:54.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API 
volume plugin Apr 2 00:37:54.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87850af3-f03c-47ad-b8c6-020787e9ac3b" in namespace "downward-api-483" to be "Succeeded or Failed" Apr 2 00:37:54.623: INFO: Pod "downwardapi-volume-87850af3-f03c-47ad-b8c6-020787e9ac3b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.534057ms Apr 2 00:37:56.627: INFO: Pod "downwardapi-volume-87850af3-f03c-47ad-b8c6-020787e9ac3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022657848s Apr 2 00:37:58.631: INFO: Pod "downwardapi-volume-87850af3-f03c-47ad-b8c6-020787e9ac3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026932571s STEP: Saw pod success Apr 2 00:37:58.631: INFO: Pod "downwardapi-volume-87850af3-f03c-47ad-b8c6-020787e9ac3b" satisfied condition "Succeeded or Failed" Apr 2 00:37:58.635: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-87850af3-f03c-47ad-b8c6-020787e9ac3b container client-container: STEP: delete the pod Apr 2 00:37:58.651: INFO: Waiting for pod downwardapi-volume-87850af3-f03c-47ad-b8c6-020787e9ac3b to disappear Apr 2 00:37:58.655: INFO: Pod downwardapi-volume-87850af3-f03c-47ad-b8c6-020787e9ac3b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:37:58.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-483" for this suite. 
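The downward API volume tests above ("set mode on item file", "provide podname only") each mount pod metadata as files and assert on what the container reads back. As a rough illustration of the projection the volume plugin performs, the following resolves a `fieldRef` path such as `metadata.name` against a flattened view of the pod object; this is a deliberate simplification of the real kubelet logic, which also handles `resourceFieldRef`, labels, and annotations:

```go
package main

import "fmt"

// resolveFieldRef maps a downward API fieldPath like "metadata.name" to
// a value from a flattened view of the pod. The returned string is what
// gets written into the mounted file the test's container cats back.
func resolveFieldRef(pod map[string]string, fieldPath string) (string, error) {
	v, ok := pod[fieldPath]
	if !ok {
		return "", fmt.Errorf("unsupported fieldPath %q", fieldPath)
	}
	return v, nil
}

func main() {
	pod := map[string]string{
		"metadata.name":      "downwardapi-volume-87850af3",
		"metadata.namespace": "downward-api-483",
	}
	// The "podname" item in the test's volume uses fieldPath metadata.name.
	name, err := resolveFieldRef(pod, "metadata.name")
	fmt.Println(name, err)
}
```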
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3560,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:37:58.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 2 00:37:58.747: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23d793f1-2bde-4f41-b2d5-ff63d78a51de" in namespace "downward-api-8552" to be "Succeeded or Failed" Apr 2 00:37:58.772: INFO: Pod "downwardapi-volume-23d793f1-2bde-4f41-b2d5-ff63d78a51de": Phase="Pending", Reason="", readiness=false. Elapsed: 25.438484ms Apr 2 00:38:00.821: INFO: Pod "downwardapi-volume-23d793f1-2bde-4f41-b2d5-ff63d78a51de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073541384s Apr 2 00:38:02.824: INFO: Pod "downwardapi-volume-23d793f1-2bde-4f41-b2d5-ff63d78a51de": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.077146262s STEP: Saw pod success Apr 2 00:38:02.824: INFO: Pod "downwardapi-volume-23d793f1-2bde-4f41-b2d5-ff63d78a51de" satisfied condition "Succeeded or Failed" Apr 2 00:38:02.828: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-23d793f1-2bde-4f41-b2d5-ff63d78a51de container client-container: STEP: delete the pod Apr 2 00:38:02.849: INFO: Waiting for pod downwardapi-volume-23d793f1-2bde-4f41-b2d5-ff63d78a51de to disappear Apr 2 00:38:02.867: INFO: Pod downwardapi-volume-23d793f1-2bde-4f41-b2d5-ff63d78a51de no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:38:02.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8552" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":209,"skipped":3563,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:38:02.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:38:09.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6701" for this suite. • [SLOW TEST:7.050 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":210,"skipped":3576,"failed":0} SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:38:09.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 2 00:38:09.991: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
Apr 2 00:38:10.518: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 2 00:38:12.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384690, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384690, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384690, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721384690, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 2 00:38:15.373: INFO: Waited 624.782352ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:38:15.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-326" for this suite. 
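The aggregator test above blocks on the sample-apiserver deployment until its status shows every replica updated, ready, and available (the logged mid-rollout status has `UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0` with reason `MinimumReplicasUnavailable`). A simplified sketch of that completeness check, using a cut-down status struct rather than the real `v1.DeploymentStatus`:

```go
package main

import "fmt"

// deploymentStatus carries just the counters the rollout wait inspects.
type deploymentStatus struct {
	Replicas, UpdatedReplicas, ReadyReplicas, AvailableReplicas int32
}

// complete reports whether the rollout has finished: all desired
// replicas are updated, ready, and available.
func complete(want int32, s deploymentStatus) bool {
	return s.UpdatedReplicas == want &&
		s.ReadyReplicas == want &&
		s.AvailableReplicas == want
}

func main() {
	// Status as logged mid-rollout: updated but not yet ready/available.
	mid := deploymentStatus{Replicas: 1, UpdatedReplicas: 1}
	fmt.Println(complete(1, mid)) // false: MinimumReplicasUnavailable
	done := deploymentStatus{Replicas: 1, UpdatedReplicas: 1, ReadyReplicas: 1, AvailableReplicas: 1}
	fmt.Println(complete(1, done)) // true: ready to handle requests
}
```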
• [SLOW TEST:6.095 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":211,"skipped":3578,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:38:16.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 2 00:38:16.156: INFO: Waiting up to 5m0s for pod "pod-d0fd7608-261d-4113-a6f8-13a714eaf1be" in namespace "emptydir-4727" to be "Succeeded or Failed" Apr 2 00:38:16.159: INFO: Pod "pod-d0fd7608-261d-4113-a6f8-13a714eaf1be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.683667ms Apr 2 00:38:18.163: INFO: Pod "pod-d0fd7608-261d-4113-a6f8-13a714eaf1be": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006983701s Apr 2 00:38:20.167: INFO: Pod "pod-d0fd7608-261d-4113-a6f8-13a714eaf1be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010942489s STEP: Saw pod success Apr 2 00:38:20.167: INFO: Pod "pod-d0fd7608-261d-4113-a6f8-13a714eaf1be" satisfied condition "Succeeded or Failed" Apr 2 00:38:20.170: INFO: Trying to get logs from node latest-worker pod pod-d0fd7608-261d-4113-a6f8-13a714eaf1be container test-container: STEP: delete the pod Apr 2 00:38:20.206: INFO: Waiting for pod pod-d0fd7608-261d-4113-a6f8-13a714eaf1be to disappear Apr 2 00:38:20.219: INFO: Pod pod-d0fd7608-261d-4113-a6f8-13a714eaf1be no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:38:20.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4727" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3587,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:38:20.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-745a7837-03f0-410f-a8da-7ec58581128b STEP: Creating 
secret with name s-test-opt-upd-6128d4e4-5d0d-404e-97a1-3d169b194b12 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-745a7837-03f0-410f-a8da-7ec58581128b STEP: Updating secret s-test-opt-upd-6128d4e4-5d0d-404e-97a1-3d169b194b12 STEP: Creating secret with name s-test-opt-create-56272943-d97f-4c6b-9efa-abafda8f95c7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:39:36.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8589" for this suite. • [SLOW TEST:76.589 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3607,"failed":0} [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:39:36.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-36a694a1-fdca-4e8d-8dc8-34bd906a6e3c [AfterEach] [sig-node] 
ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:39:36.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3306" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":214,"skipped":3607,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:39:36.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:39:37.019: INFO: Create a RollingUpdate DaemonSet Apr 2 00:39:37.022: INFO: Check that daemon pods launch on every node of the cluster Apr 2 00:39:37.026: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:39:37.031: INFO: Number of nodes with available pods: 0 Apr 2 00:39:37.031: INFO: Node latest-worker is running more than one daemon pod Apr 2 00:39:38.035: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 2 00:39:38.038: 
INFO: Number of nodes with available pods: 0
Apr 2 00:39:38.038: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:39:39.101: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:39:39.150: INFO: Number of nodes with available pods: 0
Apr 2 00:39:39.150: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:39:40.036: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:39:40.040: INFO: Number of nodes with available pods: 0
Apr 2 00:39:40.040: INFO: Node latest-worker is running more than one daemon pod
Apr 2 00:39:41.036: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:39:41.040: INFO: Number of nodes with available pods: 2
Apr 2 00:39:41.040: INFO: Number of running nodes: 2, number of available pods: 2
Apr 2 00:39:41.040: INFO: Update the DaemonSet to trigger a rollout
Apr 2 00:39:41.046: INFO: Updating DaemonSet daemon-set
Apr 2 00:39:53.080: INFO: Roll back the DaemonSet before rollout is complete
Apr 2 00:39:53.086: INFO: Updating DaemonSet daemon-set
Apr 2 00:39:53.086: INFO: Make sure DaemonSet rollback is complete
Apr 2 00:39:53.092: INFO: Wrong image for pod: daemon-set-l8sks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 2 00:39:53.092: INFO: Pod daemon-set-l8sks is not available
Apr 2 00:39:53.111: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:39:54.115: INFO: Wrong image for pod: daemon-set-l8sks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 2 00:39:54.115: INFO: Pod daemon-set-l8sks is not available
Apr 2 00:39:54.119: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:39:55.115: INFO: Wrong image for pod: daemon-set-l8sks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 2 00:39:55.115: INFO: Pod daemon-set-l8sks is not available
Apr 2 00:39:55.119: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:39:56.115: INFO: Wrong image for pod: daemon-set-l8sks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 2 00:39:56.115: INFO: Pod daemon-set-l8sks is not available
Apr 2 00:39:56.120: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:39:57.115: INFO: Wrong image for pod: daemon-set-l8sks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 2 00:39:57.115: INFO: Pod daemon-set-l8sks is not available
Apr 2 00:39:57.120: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:39:58.115: INFO: Wrong image for pod: daemon-set-l8sks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 2 00:39:58.115: INFO: Pod daemon-set-l8sks is not available
Apr 2 00:39:58.120: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:39:59.115: INFO: Wrong image for pod: daemon-set-l8sks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 2 00:39:59.115: INFO: Pod daemon-set-l8sks is not available
Apr 2 00:39:59.119: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:40:00.115: INFO: Wrong image for pod: daemon-set-l8sks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 2 00:40:00.115: INFO: Pod daemon-set-l8sks is not available
Apr 2 00:40:00.120: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:40:01.115: INFO: Wrong image for pod: daemon-set-l8sks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 2 00:40:01.115: INFO: Pod daemon-set-l8sks is not available
Apr 2 00:40:01.119: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:40:02.115: INFO: Wrong image for pod: daemon-set-l8sks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 2 00:40:02.115: INFO: Pod daemon-set-l8sks is not available
Apr 2 00:40:02.119: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 2 00:40:03.115: INFO: Pod daemon-set-w4q44 is not available
Apr 2 00:40:03.119: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2469, will wait for the garbage collector to delete the pods
Apr 2 00:40:03.184: INFO: Deleting DaemonSet.extensions daemon-set took: 6.06224ms
Apr 2 00:40:03.485: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.215478ms
Apr 2 00:40:12.792: INFO: Number of nodes with available pods: 0
Apr 2 00:40:12.792: INFO: Number of running nodes: 0, number of available pods: 0
Apr 2 00:40:12.795: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2469/daemonsets","resourceVersion":"4678547"},"items":null}
Apr 2 00:40:12.797: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2469/pods","resourceVersion":"4678547"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:40:12.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2469" for this suite.
• [SLOW TEST:35.864 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":215,"skipped":3615,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:40:12.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Apr 2 00:40:12.870: INFO: Waiting up to 5m0s for pod "client-containers-eee79e81-547c-45fe-80aa-9b6ed281b376" in namespace "containers-4137" to be "Succeeded or Failed"
Apr 2 00:40:12.874: INFO: Pod "client-containers-eee79e81-547c-45fe-80aa-9b6ed281b376": Phase="Pending", Reason="", readiness=false. Elapsed: 3.53498ms
Apr 2 00:40:14.880: INFO: Pod "client-containers-eee79e81-547c-45fe-80aa-9b6ed281b376": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009888224s
Apr 2 00:40:16.885: INFO: Pod "client-containers-eee79e81-547c-45fe-80aa-9b6ed281b376": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014283144s
STEP: Saw pod success
Apr 2 00:40:16.885: INFO: Pod "client-containers-eee79e81-547c-45fe-80aa-9b6ed281b376" satisfied condition "Succeeded or Failed"
Apr 2 00:40:16.888: INFO: Trying to get logs from node latest-worker pod client-containers-eee79e81-547c-45fe-80aa-9b6ed281b376 container test-container:
STEP: delete the pod
Apr 2 00:40:16.927: INFO: Waiting for pod client-containers-eee79e81-547c-45fe-80aa-9b6ed281b376 to disappear
Apr 2 00:40:16.954: INFO: Pod client-containers-eee79e81-547c-45fe-80aa-9b6ed281b376 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:40:16.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4137" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3637,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:40:16.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-9daa7aa4-3b0f-45c5-8706-f8d4a082d5cb
STEP: Creating a pod to test consume configMaps
Apr 2 00:40:17.038: INFO: Waiting up to 5m0s for pod "pod-configmaps-1161c8ec-f4ff-4e2b-bc4d-bfb971b335ff" in namespace "configmap-8287" to be "Succeeded or Failed"
Apr 2 00:40:17.042: INFO: Pod "pod-configmaps-1161c8ec-f4ff-4e2b-bc4d-bfb971b335ff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.584028ms
Apr 2 00:40:19.046: INFO: Pod "pod-configmaps-1161c8ec-f4ff-4e2b-bc4d-bfb971b335ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007719283s
Apr 2 00:40:21.050: INFO: Pod "pod-configmaps-1161c8ec-f4ff-4e2b-bc4d-bfb971b335ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011726904s
STEP: Saw pod success
Apr 2 00:40:21.050: INFO: Pod "pod-configmaps-1161c8ec-f4ff-4e2b-bc4d-bfb971b335ff" satisfied condition "Succeeded or Failed"
Apr 2 00:40:21.053: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1161c8ec-f4ff-4e2b-bc4d-bfb971b335ff container configmap-volume-test:
STEP: delete the pod
Apr 2 00:40:21.117: INFO: Waiting for pod pod-configmaps-1161c8ec-f4ff-4e2b-bc4d-bfb971b335ff to disappear
Apr 2 00:40:21.120: INFO: Pod pod-configmaps-1161c8ec-f4ff-4e2b-bc4d-bfb971b335ff no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:40:21.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8287" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3661,"failed":0}
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:40:21.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-56b17699-7b38-4984-ba29-854eca122211
STEP: Creating a pod to test consume secrets
Apr 2 00:40:21.214: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-44e101ce-554b-40d8-8236-d5f2bed8d33c" in namespace "projected-7706" to be "Succeeded or Failed"
Apr 2 00:40:21.235: INFO: Pod "pod-projected-secrets-44e101ce-554b-40d8-8236-d5f2bed8d33c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.157131ms
Apr 2 00:40:23.242: INFO: Pod "pod-projected-secrets-44e101ce-554b-40d8-8236-d5f2bed8d33c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028294927s
Apr 2 00:40:25.247: INFO: Pod "pod-projected-secrets-44e101ce-554b-40d8-8236-d5f2bed8d33c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033239609s
STEP: Saw pod success
Apr 2 00:40:25.247: INFO: Pod "pod-projected-secrets-44e101ce-554b-40d8-8236-d5f2bed8d33c" satisfied condition "Succeeded or Failed"
Apr 2 00:40:25.250: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-44e101ce-554b-40d8-8236-d5f2bed8d33c container secret-volume-test:
STEP: delete the pod
Apr 2 00:40:25.267: INFO: Waiting for pod pod-projected-secrets-44e101ce-554b-40d8-8236-d5f2bed8d33c to disappear
Apr 2 00:40:25.283: INFO: Pod pod-projected-secrets-44e101ce-554b-40d8-8236-d5f2bed8d33c no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:40:25.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7706" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3661,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:40:25.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 2 00:40:25.333: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5f83377-ba6a-481f-b739-9fc7fb2e127d" in namespace "downward-api-2644" to be "Succeeded or Failed"
Apr 2 00:40:25.346: INFO: Pod "downwardapi-volume-e5f83377-ba6a-481f-b739-9fc7fb2e127d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.194272ms
Apr 2 00:40:27.351: INFO: Pod "downwardapi-volume-e5f83377-ba6a-481f-b739-9fc7fb2e127d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017595752s
Apr 2 00:40:29.355: INFO: Pod "downwardapi-volume-e5f83377-ba6a-481f-b739-9fc7fb2e127d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02205022s
STEP: Saw pod success
Apr 2 00:40:29.355: INFO: Pod "downwardapi-volume-e5f83377-ba6a-481f-b739-9fc7fb2e127d" satisfied condition "Succeeded or Failed"
Apr 2 00:40:29.358: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e5f83377-ba6a-481f-b739-9fc7fb2e127d container client-container:
STEP: delete the pod
Apr 2 00:40:29.376: INFO: Waiting for pod downwardapi-volume-e5f83377-ba6a-481f-b739-9fc7fb2e127d to disappear
Apr 2 00:40:29.391: INFO: Pod downwardapi-volume-e5f83377-ba6a-481f-b739-9fc7fb2e127d no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:40:29.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2644" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3666,"failed":0}
S
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:40:29.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 2 00:40:29.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9a10799-515f-4ff5-9e8d-2b65e3c7aeb1" in namespace "downward-api-7701" to be "Succeeded or Failed"
Apr 2 00:40:29.476: INFO: Pod "downwardapi-volume-d9a10799-515f-4ff5-9e8d-2b65e3c7aeb1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.901371ms
Apr 2 00:40:31.479: INFO: Pod "downwardapi-volume-d9a10799-515f-4ff5-9e8d-2b65e3c7aeb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019481914s
Apr 2 00:40:33.484: INFO: Pod "downwardapi-volume-d9a10799-515f-4ff5-9e8d-2b65e3c7aeb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024094354s
STEP: Saw pod success
Apr 2 00:40:33.484: INFO: Pod "downwardapi-volume-d9a10799-515f-4ff5-9e8d-2b65e3c7aeb1" satisfied condition "Succeeded or Failed"
Apr 2 00:40:33.487: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d9a10799-515f-4ff5-9e8d-2b65e3c7aeb1 container client-container:
STEP: delete the pod
Apr 2 00:40:33.520: INFO: Waiting for pod downwardapi-volume-d9a10799-515f-4ff5-9e8d-2b65e3c7aeb1 to disappear
Apr 2 00:40:33.534: INFO: Pod downwardapi-volume-d9a10799-515f-4ff5-9e8d-2b65e3c7aeb1 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:40:33.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7701" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3667,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:40:33.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Apr 2 00:40:33.616: INFO: >>> kubeConfig: /root/.kube/config
Apr 2 00:40:36.523: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:40:46.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4680" for this suite.
• [SLOW TEST:12.523 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":221,"skipped":3673,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:40:46.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:40:57.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1401" for this suite.
• [SLOW TEST:11.184 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":222,"skipped":3683,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:40:57.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:41:01.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6320" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3701,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:41:01.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 2 00:41:01.478: INFO: Waiting up to 5m0s for pod "downward-api-59b5b088-43aa-4684-a5c8-01c923d66c80" in namespace "downward-api-9747" to be "Succeeded or Failed"
Apr 2 00:41:01.493: INFO: Pod "downward-api-59b5b088-43aa-4684-a5c8-01c923d66c80": Phase="Pending", Reason="", readiness=false. Elapsed: 14.5293ms
Apr 2 00:41:03.497: INFO: Pod "downward-api-59b5b088-43aa-4684-a5c8-01c923d66c80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018883114s
Apr 2 00:41:05.502: INFO: Pod "downward-api-59b5b088-43aa-4684-a5c8-01c923d66c80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024049453s
STEP: Saw pod success
Apr 2 00:41:05.502: INFO: Pod "downward-api-59b5b088-43aa-4684-a5c8-01c923d66c80" satisfied condition "Succeeded or Failed"
Apr 2 00:41:05.507: INFO: Trying to get logs from node latest-worker pod downward-api-59b5b088-43aa-4684-a5c8-01c923d66c80 container dapi-container:
STEP: delete the pod
Apr 2 00:41:05.549: INFO: Waiting for pod downward-api-59b5b088-43aa-4684-a5c8-01c923d66c80 to disappear
Apr 2 00:41:05.558: INFO: Pod downward-api-59b5b088-43aa-4684-a5c8-01c923d66c80 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:41:05.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9747" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3750,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:41:05.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Apr 2 00:41:05.650: INFO: namespace kubectl-8207
Apr 2 00:41:05.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8207'
Apr 2 00:41:08.406: INFO: stderr: ""
Apr 2 00:41:08.406: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 2 00:41:09.416: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:41:09.416: INFO: Found 0 / 1
Apr 2 00:41:10.411: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:41:10.411: INFO: Found 0 / 1
Apr 2 00:41:11.410: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:41:11.410: INFO: Found 1 / 1
Apr 2 00:41:11.410: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 2 00:41:11.414: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:41:11.414: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 2 00:41:11.414: INFO: wait on agnhost-master startup in kubectl-8207
Apr 2 00:41:11.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-7fcbv agnhost-master --namespace=kubectl-8207'
Apr 2 00:41:11.532: INFO: stderr: ""
Apr 2 00:41:11.532: INFO: stdout: "Paused\n"
STEP: exposing RC
Apr 2 00:41:11.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8207'
Apr 2 00:41:11.651: INFO: stderr: ""
Apr 2 00:41:11.651: INFO: stdout: "service/rm2 exposed\n"
Apr 2 00:41:11.660: INFO: Service rm2 in namespace kubectl-8207 found.
STEP: exposing service
Apr 2 00:41:13.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8207'
Apr 2 00:41:13.829: INFO: stderr: ""
Apr 2 00:41:13.829: INFO: stdout: "service/rm3 exposed\n"
Apr 2 00:41:13.832: INFO: Service rm3 in namespace kubectl-8207 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:41:15.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8207" for this suite.
• [SLOW TEST:10.281 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":225,"skipped":3769,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:41:15.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Apr 2 00:41:15.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6298'
Apr 2 00:41:16.107: INFO: stderr: ""
Apr 2 00:41:16.107: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 2 00:41:17.111: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:41:17.111: INFO: Found 0 / 1
Apr 2 00:41:18.110: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:41:18.111: INFO: Found 0 / 1
Apr 2 00:41:19.111: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:41:19.111: INFO: Found 0 / 1
Apr 2 00:41:20.112: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:41:20.112: INFO: Found 1 / 1
Apr 2 00:41:20.112: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 2 00:41:20.115: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:41:20.115: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 2 00:41:20.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-7l6h4 --namespace=kubectl-6298 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 2 00:41:20.212: INFO: stderr: ""
Apr 2 00:41:20.212: INFO: stdout: "pod/agnhost-master-7l6h4 patched\n"
STEP: checking annotations
Apr 2 00:41:20.236: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 2 00:41:20.236: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:41:20.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6298" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":226,"skipped":3834,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:41:20.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:41:20.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 2 00:41:23.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9526 create -f -'
Apr 2 00:41:26.815: INFO: stderr: ""
Apr 2 00:41:26.815: INFO: stdout: "e2e-test-crd-publish-openapi-539-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 2 00:41:26.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9526 delete e2e-test-crd-publish-openapi-539-crds test-cr'
Apr 2 00:41:27.028: INFO: stderr: ""
Apr 2 00:41:27.028: INFO: stdout: "e2e-test-crd-publish-openapi-539-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Apr 2 00:41:27.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9526 apply -f -'
Apr 2 00:41:27.263: INFO: stderr: ""
Apr 2 00:41:27.263: INFO: stdout: "e2e-test-crd-publish-openapi-539-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 2 00:41:27.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9526 delete e2e-test-crd-publish-openapi-539-crds test-cr'
Apr 2 00:41:27.491: INFO: stderr: ""
Apr 2 00:41:27.491: INFO: stdout: "e2e-test-crd-publish-openapi-539-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Apr 2 00:41:27.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-539-crds'
Apr 2 00:41:27.772: INFO: stderr: ""
Apr 2 00:41:27.772: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-539-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:41:30.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9526" for this suite.
• [SLOW TEST:10.364 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":227,"skipped":3849,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:41:30.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-a96dedf5-32b0-4d6a-8ad8-ad5f5c491c9f
STEP: Creating a pod to test consume secrets
Apr 2 00:41:30.690: INFO: Waiting up to 5m0s for pod "pod-secrets-f59e5398-efff-47b7-b9c9-ba6f5474ce14" in namespace "secrets-4207" to be "Succeeded or Failed"
Apr 2 00:41:30.714: INFO: Pod "pod-secrets-f59e5398-efff-47b7-b9c9-ba6f5474ce14": Phase="Pending", Reason="", readiness=false. Elapsed: 24.354906ms
Apr 2 00:41:32.719: INFO: Pod "pod-secrets-f59e5398-efff-47b7-b9c9-ba6f5474ce14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02878354s
Apr 2 00:41:34.723: INFO: Pod "pod-secrets-f59e5398-efff-47b7-b9c9-ba6f5474ce14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033365919s
STEP: Saw pod success
Apr 2 00:41:34.723: INFO: Pod "pod-secrets-f59e5398-efff-47b7-b9c9-ba6f5474ce14" satisfied condition "Succeeded or Failed"
Apr 2 00:41:34.727: INFO: Trying to get logs from node latest-worker pod pod-secrets-f59e5398-efff-47b7-b9c9-ba6f5474ce14 container secret-volume-test:
STEP: delete the pod
Apr 2 00:41:34.764: INFO: Waiting for pod pod-secrets-f59e5398-efff-47b7-b9c9-ba6f5474ce14 to disappear
Apr 2 00:41:34.770: INFO: Pod pod-secrets-f59e5398-efff-47b7-b9c9-ba6f5474ce14 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:41:34.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4207" for this suite.
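The repeated Phase/Elapsed lines above come from a poll-until-terminal-phase wait ("Succeeded or Failed", up to 5m0s). As an editorial illustration (not the framework's actual code), a minimal sketch of that pattern, with `get_phase` as a hypothetical stand-in for the real API call:

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it reports a terminal phase or the timeout expires."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Simulated phase sequence mirroring the log: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), sleep=lambda _: None))  # Succeeded
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting, which is why the simulated run above finishes instantly.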
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3889,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:41:34.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-2248caa4-eb31-4745-a472-80cac8cbcba1
STEP: Creating a pod to test consume secrets
Apr 2 00:41:34.860: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-28a7a6f3-1e6e-4daf-8e5b-cf612fbb6338" in namespace "projected-9588" to be "Succeeded or Failed"
Apr 2 00:41:34.866: INFO: Pod "pod-projected-secrets-28a7a6f3-1e6e-4daf-8e5b-cf612fbb6338": Phase="Pending", Reason="", readiness=false. Elapsed: 5.902854ms
Apr 2 00:41:36.869: INFO: Pod "pod-projected-secrets-28a7a6f3-1e6e-4daf-8e5b-cf612fbb6338": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009001151s
Apr 2 00:41:38.874: INFO: Pod "pod-projected-secrets-28a7a6f3-1e6e-4daf-8e5b-cf612fbb6338": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01324497s
STEP: Saw pod success
Apr 2 00:41:38.874: INFO: Pod "pod-projected-secrets-28a7a6f3-1e6e-4daf-8e5b-cf612fbb6338" satisfied condition "Succeeded or Failed"
Apr 2 00:41:38.877: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-28a7a6f3-1e6e-4daf-8e5b-cf612fbb6338 container projected-secret-volume-test:
STEP: delete the pod
Apr 2 00:41:38.897: INFO: Waiting for pod pod-projected-secrets-28a7a6f3-1e6e-4daf-8e5b-cf612fbb6338 to disappear
Apr 2 00:41:38.903: INFO: Pod pod-projected-secrets-28a7a6f3-1e6e-4daf-8e5b-cf612fbb6338 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:41:38.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9588" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3907,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:41:38.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-396eed2e-3dbc-40d8-a64f-6b41d73225bf
STEP: Creating a pod to test consume configMaps
Apr 2 00:41:39.024: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3cb9a45-36ad-477e-9e1b-95be447408fc" in namespace "configmap-5438" to be "Succeeded or Failed"
Apr 2 00:41:39.028: INFO: Pod "pod-configmaps-e3cb9a45-36ad-477e-9e1b-95be447408fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.577375ms
Apr 2 00:41:41.129: INFO: Pod "pod-configmaps-e3cb9a45-36ad-477e-9e1b-95be447408fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105307087s
Apr 2 00:41:43.134: INFO: Pod "pod-configmaps-e3cb9a45-36ad-477e-9e1b-95be447408fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1096719s
STEP: Saw pod success
Apr 2 00:41:43.134: INFO: Pod "pod-configmaps-e3cb9a45-36ad-477e-9e1b-95be447408fc" satisfied condition "Succeeded or Failed"
Apr 2 00:41:43.137: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-e3cb9a45-36ad-477e-9e1b-95be447408fc container configmap-volume-test:
STEP: delete the pod
Apr 2 00:41:43.201: INFO: Waiting for pod pod-configmaps-e3cb9a45-36ad-477e-9e1b-95be447408fc to disappear
Apr 2 00:41:43.204: INFO: Pod pod-configmaps-e3cb9a45-36ad-477e-9e1b-95be447408fc no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:41:43.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5438" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3922,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:41:43.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-9f2667a5-c93f-42f5-9ba0-55ab268c8125
STEP: Creating a pod to test consume secrets
Apr 2 00:41:43.269: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e368fb62-5308-4aa1-96fb-17e9f1ee5490" in namespace "projected-4757" to be "Succeeded or Failed"
Apr 2 00:41:43.272: INFO: Pod "pod-projected-secrets-e368fb62-5308-4aa1-96fb-17e9f1ee5490": Phase="Pending", Reason="", readiness=false. Elapsed: 3.361255ms
Apr 2 00:41:45.275: INFO: Pod "pod-projected-secrets-e368fb62-5308-4aa1-96fb-17e9f1ee5490": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006470061s
Apr 2 00:41:47.285: INFO: Pod "pod-projected-secrets-e368fb62-5308-4aa1-96fb-17e9f1ee5490": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01623972s
STEP: Saw pod success
Apr 2 00:41:47.285: INFO: Pod "pod-projected-secrets-e368fb62-5308-4aa1-96fb-17e9f1ee5490" satisfied condition "Succeeded or Failed"
Apr 2 00:41:47.290: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-e368fb62-5308-4aa1-96fb-17e9f1ee5490 container projected-secret-volume-test:
STEP: delete the pod
Apr 2 00:41:47.330: INFO: Waiting for pod pod-projected-secrets-e368fb62-5308-4aa1-96fb-17e9f1ee5490 to disappear
Apr 2 00:41:47.356: INFO: Pod pod-projected-secrets-e368fb62-5308-4aa1-96fb-17e9f1ee5490 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:41:47.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4757" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3989,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:41:47.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:42:03.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4654" for this suite.
• [SLOW TEST:16.119 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":232,"skipped":4006,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:42:03.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3495.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3495.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 2 00:42:09.639: INFO: DNS probes using dns-3495/dns-test-18242f6d-c586-442b-9fd1-c8434a2e5d72 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:42:09.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3495" for this suite.
• [SLOW TEST:6.238 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":233,"skipped":4013,"failed":0}
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:42:09.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-586e1d29-d3cd-4c1e-aa7f-b24dd014d190
STEP: Creating a pod to test consume configMaps
Apr 2 00:42:09.778: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d094b458-9a75-4e27-ae78-680a91385f71" in namespace "projected-3972" to be "Succeeded or Failed"
Apr 2 00:42:09.792: INFO: Pod "pod-projected-configmaps-d094b458-9a75-4e27-ae78-680a91385f71": Phase="Pending", Reason="", readiness=false. Elapsed: 14.317764ms
Apr 2 00:42:11.796: INFO: Pod "pod-projected-configmaps-d094b458-9a75-4e27-ae78-680a91385f71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017886919s
Apr 2 00:42:13.800: INFO: Pod "pod-projected-configmaps-d094b458-9a75-4e27-ae78-680a91385f71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022236834s
STEP: Saw pod success
Apr 2 00:42:13.800: INFO: Pod "pod-projected-configmaps-d094b458-9a75-4e27-ae78-680a91385f71" satisfied condition "Succeeded or Failed"
Apr 2 00:42:13.804: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-d094b458-9a75-4e27-ae78-680a91385f71 container projected-configmap-volume-test:
STEP: delete the pod
Apr 2 00:42:13.836: INFO: Waiting for pod pod-projected-configmaps-d094b458-9a75-4e27-ae78-680a91385f71 to disappear
Apr 2 00:42:13.848: INFO: Pod pod-projected-configmaps-d094b458-9a75-4e27-ae78-680a91385f71 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:42:13.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3972" for this suite.
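The `{"msg":...}` progress markers interleaved through this log are single-line JSON records, so pass/fail totals can be tallied mechanically. A small editorial sketch (the two sample lines are copied verbatim from this log; the tallying logic itself is an illustration, not part of the test framework):

```python
import json

# Two progress lines copied verbatim from this log.
lines = [
    '{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3889,"failed":0}',
    '{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":233,"skipped":4013,"failed":0}',
]
records = [json.loads(line) for line in lines]
passed = sum(1 for r in records if r["msg"].startswith("PASSED"))
latest = max(records, key=lambda r: r["completed"])
print(passed, latest["completed"], latest["failed"])  # 2 233 0
```

Filtering a full run's output through a parser like this gives the completed/skipped/failed counters without scraping the surrounding free-form text.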
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":4013,"failed":0}
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:42:13.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:42:13.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Apr 2 00:42:14.500: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-02T00:42:14Z generation:1 name:name1 resourceVersion:4679412 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d71c1660-6aa3-4e15-90e4-16b98cc670f0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Apr 2 00:42:24.504: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-02T00:42:24Z generation:1 name:name2 resourceVersion:4679467 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:10a6a4e4-c51d-44e3-96e7-3b716956db6c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Apr 2 00:42:34.510: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-02T00:42:14Z generation:2 name:name1 resourceVersion:4679495 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d71c1660-6aa3-4e15-90e4-16b98cc670f0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Apr 2 00:42:44.516: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-02T00:42:24Z generation:2 name:name2 resourceVersion:4679525 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:10a6a4e4-c51d-44e3-96e7-3b716956db6c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Apr 2 00:42:54.524: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-02T00:42:14Z generation:2 name:name1 resourceVersion:4679555 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d71c1660-6aa3-4e15-90e4-16b98cc670f0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Apr 2 00:43:04.532: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-02T00:42:24Z generation:2 name:name2 resourceVersion:4679585 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:10a6a4e4-c51d-44e3-96e7-3b716956db6c] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:43:15.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-4443" for this suite.
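The watch test above expects each custom resource to progress ADDED, then MODIFIED, then DELETED. As an editorial illustration, the per-object ordering seen in the log can be checked with a few lines (the event tuples below are transcribed from the log; the checking code is not part of the suite):

```python
from collections import defaultdict

# Event stream as reported by the log for CRs name1 and name2.
events = [
    ("ADDED", "name1"), ("ADDED", "name2"),
    ("MODIFIED", "name1"), ("MODIFIED", "name2"),
    ("DELETED", "name1"), ("DELETED", "name2"),
]
per_object = defaultdict(list)
for event_type, name in events:
    per_object[name].append(event_type)

# Each object should see the full ADDED -> MODIFIED -> DELETED lifecycle, in order.
assert all(seq == ["ADDED", "MODIFIED", "DELETED"] for seq in per_object.values())
print(dict(per_object))
```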
• [SLOW TEST:61.235 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":235,"skipped":4013,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:43:15.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Apr 2 00:43:15.168: INFO: Waiting up to 5m0s for pod "client-containers-a9446a85-5eee-4c98-8de6-2ad717c37a69" in namespace "containers-8177" to be "Succeeded or Failed"
Apr 2 00:43:15.173: INFO: Pod "client-containers-a9446a85-5eee-4c98-8de6-2ad717c37a69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.688165ms
Apr 2 00:43:17.176: INFO: Pod "client-containers-a9446a85-5eee-4c98-8de6-2ad717c37a69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007871397s
Apr 2 00:43:19.179: INFO: Pod "client-containers-a9446a85-5eee-4c98-8de6-2ad717c37a69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011348211s
STEP: Saw pod success
Apr 2 00:43:19.180: INFO: Pod "client-containers-a9446a85-5eee-4c98-8de6-2ad717c37a69" satisfied condition "Succeeded or Failed"
Apr 2 00:43:19.182: INFO: Trying to get logs from node latest-worker pod client-containers-a9446a85-5eee-4c98-8de6-2ad717c37a69 container test-container:
STEP: delete the pod
Apr 2 00:43:19.222: INFO: Waiting for pod client-containers-a9446a85-5eee-4c98-8de6-2ad717c37a69 to disappear
Apr 2 00:43:19.256: INFO: Pod client-containers-a9446a85-5eee-4c98-8de6-2ad717c37a69 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:43:19.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8177" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4017,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:43:19.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:43:19.317: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 2 00:43:24.324: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 2 00:43:24.324: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 2 00:43:28.378: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8597 /apis/apps/v1/namespaces/deployment-8597/deployments/test-cleanup-deployment 13cb38dc-11a2-4844-a4d7-d407a1aa2a1a 4679728 1 2020-04-02 00:43:24 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004cd0ba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-02 00:43:24 +0000 UTC,LastTransitionTime:2020-04-02 00:43:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-577c77b589" has successfully progressed.,LastUpdateTime:2020-04-02 00:43:26 +0000 UTC,LastTransitionTime:2020-04-02 00:43:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 2 00:43:28.382: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-8597 
/apis/apps/v1/namespaces/deployment-8597/replicasets/test-cleanup-deployment-577c77b589 00aaaeaf-d0b2-484b-bd1e-576c8d8cf027 4679716 1 2020-04-02 00:43:24 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 13cb38dc-11a2-4844-a4d7-d407a1aa2a1a 0xc004cd11c7 0xc004cd11c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004cd1258 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 2 00:43:28.385: INFO: Pod "test-cleanup-deployment-577c77b589-2vssh" is available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-2vssh test-cleanup-deployment-577c77b589- deployment-8597 /api/v1/namespaces/deployment-8597/pods/test-cleanup-deployment-577c77b589-2vssh dfd4c0df-f570-4577-a3fe-5470e81af769 4679714 0 2020-04-02 00:43:24 +0000 UTC map[name:cleanup-pod 
pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 00aaaeaf-d0b2-484b-bd1e-576c8d8cf027 0xc004cac537 0xc004cac538}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bb7l4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bb7l4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bb7l4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOption
s:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:43:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:43:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:43:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-02 00:43:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.71,StartTime:2020-04-02 00:43:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-02 00:43:26 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://8d4a127b0eb9335aba82fe284e036bcbde6af77079b5a34b4f17be23e90ffdf4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:43:28.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8597" for this suite. • [SLOW TEST:9.129 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":237,"skipped":4025,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:43:28.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Apr 2 00:43:28.505: INFO: Waiting up to 5m0s for pod "var-expansion-ab2798d5-823e-4b4c-87e4-7a6446f3993f" in namespace "var-expansion-1281" to be "Succeeded or Failed" Apr 2 00:43:28.520: INFO: Pod "var-expansion-ab2798d5-823e-4b4c-87e4-7a6446f3993f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.854116ms Apr 2 00:43:30.524: INFO: Pod "var-expansion-ab2798d5-823e-4b4c-87e4-7a6446f3993f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018643189s Apr 2 00:43:32.528: INFO: Pod "var-expansion-ab2798d5-823e-4b4c-87e4-7a6446f3993f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02332145s STEP: Saw pod success Apr 2 00:43:32.528: INFO: Pod "var-expansion-ab2798d5-823e-4b4c-87e4-7a6446f3993f" satisfied condition "Succeeded or Failed" Apr 2 00:43:32.532: INFO: Trying to get logs from node latest-worker pod var-expansion-ab2798d5-823e-4b4c-87e4-7a6446f3993f container dapi-container: STEP: delete the pod Apr 2 00:43:32.553: INFO: Waiting for pod var-expansion-ab2798d5-823e-4b4c-87e4-7a6446f3993f to disappear Apr 2 00:43:32.556: INFO: Pod var-expansion-ab2798d5-823e-4b4c-87e4-7a6446f3993f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:43:32.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1281" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4025,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:43:32.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 2 00:43:36.652: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:43:36.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-990" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4033,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:43:36.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2656 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 2 00:43:36.778: INFO: Found 0 stateful pods, waiting for 3 Apr 2 00:43:46.784: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 2 00:43:46.785: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 2 00:43:46.785: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 2 00:43:46.795: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2656 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 00:43:47.058: INFO: stderr: "I0402 00:43:46.931607 3009 log.go:172] (0xc0009d80b0) (0xc000934320) Create stream\nI0402 00:43:46.931678 3009 log.go:172] (0xc0009d80b0) (0xc000934320) Stream added, broadcasting: 1\nI0402 00:43:46.934369 3009 log.go:172] (0xc0009d80b0) Reply frame received for 1\nI0402 00:43:46.934432 3009 log.go:172] (0xc0009d80b0) (0xc0005bb680) Create stream\nI0402 00:43:46.934461 3009 log.go:172] (0xc0009d80b0) (0xc0005bb680) Stream added, broadcasting: 3\nI0402 00:43:46.935346 3009 log.go:172] (0xc0009d80b0) Reply frame received for 3\nI0402 00:43:46.935377 3009 log.go:172] (0xc0009d80b0) (0xc0009343c0) Create stream\nI0402 00:43:46.935387 3009 log.go:172] (0xc0009d80b0) (0xc0009343c0) Stream added, broadcasting: 5\nI0402 00:43:46.936369 3009 log.go:172] (0xc0009d80b0) Reply frame received for 5\nI0402 00:43:47.022387 3009 log.go:172] (0xc0009d80b0) Data frame received for 5\nI0402 00:43:47.022420 3009 log.go:172] (0xc0009343c0) (5) Data frame handling\nI0402 00:43:47.022441 3009 log.go:172] (0xc0009343c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 00:43:47.050963 3009 log.go:172] (0xc0009d80b0) Data frame received for 3\nI0402 00:43:47.051005 3009 log.go:172] (0xc0005bb680) (3) Data frame handling\nI0402 00:43:47.051040 3009 log.go:172] (0xc0005bb680) (3) Data frame sent\nI0402 00:43:47.051066 3009 log.go:172] (0xc0009d80b0) Data frame received for 3\nI0402 00:43:47.051082 3009 log.go:172] (0xc0005bb680) (3) Data frame handling\nI0402 00:43:47.051518 3009 log.go:172] (0xc0009d80b0) Data frame received for 5\nI0402 00:43:47.051631 3009 log.go:172] (0xc0009343c0) (5) Data frame handling\nI0402 00:43:47.053357 3009 log.go:172] (0xc0009d80b0) Data frame received for 1\nI0402 00:43:47.053527 3009 
log.go:172] (0xc000934320) (1) Data frame handling\nI0402 00:43:47.053632 3009 log.go:172] (0xc000934320) (1) Data frame sent\nI0402 00:43:47.053673 3009 log.go:172] (0xc0009d80b0) (0xc000934320) Stream removed, broadcasting: 1\nI0402 00:43:47.053712 3009 log.go:172] (0xc0009d80b0) Go away received\nI0402 00:43:47.054274 3009 log.go:172] (0xc0009d80b0) (0xc000934320) Stream removed, broadcasting: 1\nI0402 00:43:47.054324 3009 log.go:172] (0xc0009d80b0) (0xc0005bb680) Stream removed, broadcasting: 3\nI0402 00:43:47.054348 3009 log.go:172] (0xc0009d80b0) (0xc0009343c0) Stream removed, broadcasting: 5\n" Apr 2 00:43:47.058: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 00:43:47.058: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 2 00:43:57.088: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 2 00:44:07.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2656 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 00:44:07.418: INFO: stderr: "I0402 00:44:07.318549 3029 log.go:172] (0xc000a4e0b0) (0xc00080f540) Create stream\nI0402 00:44:07.318616 3029 log.go:172] (0xc000a4e0b0) (0xc00080f540) Stream added, broadcasting: 1\nI0402 00:44:07.327808 3029 log.go:172] (0xc000a4e0b0) Reply frame received for 1\nI0402 00:44:07.327863 3029 log.go:172] (0xc000a4e0b0) (0xc0009e6000) Create stream\nI0402 00:44:07.327878 3029 log.go:172] (0xc000a4e0b0) (0xc0009e6000) Stream added, broadcasting: 3\nI0402 00:44:07.329060 3029 log.go:172] (0xc000a4e0b0) Reply frame received for 3\nI0402 00:44:07.329227 3029 log.go:172] (0xc000a4e0b0) 
(0xc0004bc000) Create stream\nI0402 00:44:07.329252 3029 log.go:172] (0xc000a4e0b0) (0xc0004bc000) Stream added, broadcasting: 5\nI0402 00:44:07.330320 3029 log.go:172] (0xc000a4e0b0) Reply frame received for 5\nI0402 00:44:07.412171 3029 log.go:172] (0xc000a4e0b0) Data frame received for 3\nI0402 00:44:07.412195 3029 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0402 00:44:07.412203 3029 log.go:172] (0xc0009e6000) (3) Data frame sent\nI0402 00:44:07.412212 3029 log.go:172] (0xc000a4e0b0) Data frame received for 3\nI0402 00:44:07.412220 3029 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0402 00:44:07.412259 3029 log.go:172] (0xc000a4e0b0) Data frame received for 5\nI0402 00:44:07.412286 3029 log.go:172] (0xc0004bc000) (5) Data frame handling\nI0402 00:44:07.412319 3029 log.go:172] (0xc0004bc000) (5) Data frame sent\nI0402 00:44:07.412334 3029 log.go:172] (0xc000a4e0b0) Data frame received for 5\nI0402 00:44:07.412352 3029 log.go:172] (0xc0004bc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 00:44:07.414597 3029 log.go:172] (0xc000a4e0b0) Data frame received for 1\nI0402 00:44:07.414618 3029 log.go:172] (0xc00080f540) (1) Data frame handling\nI0402 00:44:07.414630 3029 log.go:172] (0xc00080f540) (1) Data frame sent\nI0402 00:44:07.414643 3029 log.go:172] (0xc000a4e0b0) (0xc00080f540) Stream removed, broadcasting: 1\nI0402 00:44:07.414897 3029 log.go:172] (0xc000a4e0b0) Go away received\nI0402 00:44:07.414988 3029 log.go:172] (0xc000a4e0b0) (0xc00080f540) Stream removed, broadcasting: 1\nI0402 00:44:07.415010 3029 log.go:172] (0xc000a4e0b0) (0xc0009e6000) Stream removed, broadcasting: 3\nI0402 00:44:07.415020 3029 log.go:172] (0xc000a4e0b0) (0xc0004bc000) Stream removed, broadcasting: 5\n" Apr 2 00:44:07.418: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 00:44:07.418: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html' Apr 2 00:44:27.435: INFO: Waiting for StatefulSet statefulset-2656/ss2 to complete update Apr 2 00:44:27.435: INFO: Waiting for Pod statefulset-2656/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 2 00:44:37.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2656 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 00:44:37.693: INFO: stderr: "I0402 00:44:37.575733 3051 log.go:172] (0xc000af0c60) (0xc000a4e780) Create stream\nI0402 00:44:37.575801 3051 log.go:172] (0xc000af0c60) (0xc000a4e780) Stream added, broadcasting: 1\nI0402 00:44:37.578944 3051 log.go:172] (0xc000af0c60) Reply frame received for 1\nI0402 00:44:37.578996 3051 log.go:172] (0xc000af0c60) (0xc000ae0320) Create stream\nI0402 00:44:37.579010 3051 log.go:172] (0xc000af0c60) (0xc000ae0320) Stream added, broadcasting: 3\nI0402 00:44:37.580054 3051 log.go:172] (0xc000af0c60) Reply frame received for 3\nI0402 00:44:37.580075 3051 log.go:172] (0xc000af0c60) (0xc000a4e820) Create stream\nI0402 00:44:37.580083 3051 log.go:172] (0xc000af0c60) (0xc000a4e820) Stream added, broadcasting: 5\nI0402 00:44:37.581045 3051 log.go:172] (0xc000af0c60) Reply frame received for 5\nI0402 00:44:37.651068 3051 log.go:172] (0xc000af0c60) Data frame received for 5\nI0402 00:44:37.651098 3051 log.go:172] (0xc000a4e820) (5) Data frame handling\nI0402 00:44:37.651119 3051 log.go:172] (0xc000a4e820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 00:44:37.684794 3051 log.go:172] (0xc000af0c60) Data frame received for 3\nI0402 00:44:37.684832 3051 log.go:172] (0xc000ae0320) (3) Data frame handling\nI0402 00:44:37.684873 3051 log.go:172] (0xc000ae0320) (3) Data frame sent\nI0402 00:44:37.684987 3051 log.go:172] (0xc000af0c60) Data frame received for 5\nI0402 00:44:37.685002 
3051 log.go:172] (0xc000a4e820) (5) Data frame handling\nI0402 00:44:37.685437 3051 log.go:172] (0xc000af0c60) Data frame received for 3\nI0402 00:44:37.685472 3051 log.go:172] (0xc000ae0320) (3) Data frame handling\nI0402 00:44:37.687119 3051 log.go:172] (0xc000af0c60) Data frame received for 1\nI0402 00:44:37.687157 3051 log.go:172] (0xc000a4e780) (1) Data frame handling\nI0402 00:44:37.687179 3051 log.go:172] (0xc000a4e780) (1) Data frame sent\nI0402 00:44:37.687201 3051 log.go:172] (0xc000af0c60) (0xc000a4e780) Stream removed, broadcasting: 1\nI0402 00:44:37.687302 3051 log.go:172] (0xc000af0c60) Go away received\nI0402 00:44:37.688664 3051 log.go:172] (0xc000af0c60) (0xc000a4e780) Stream removed, broadcasting: 1\nI0402 00:44:37.688725 3051 log.go:172] (0xc000af0c60) (0xc000ae0320) Stream removed, broadcasting: 3\nI0402 00:44:37.688757 3051 log.go:172] (0xc000af0c60) (0xc000a4e820) Stream removed, broadcasting: 5\n" Apr 2 00:44:37.693: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 00:44:37.693: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 00:44:47.747: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 2 00:44:57.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2656 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 00:44:57.979: INFO: stderr: "I0402 00:44:57.908503 3072 log.go:172] (0xc0009422c0) (0xc0006cf540) Create stream\nI0402 00:44:57.908587 3072 log.go:172] (0xc0009422c0) (0xc0006cf540) Stream added, broadcasting: 1\nI0402 00:44:57.911560 3072 log.go:172] (0xc0009422c0) Reply frame received for 1\nI0402 00:44:57.911615 3072 log.go:172] (0xc0009422c0) (0xc000a66000) Create stream\nI0402 00:44:57.911636 3072 log.go:172] (0xc0009422c0) (0xc000a66000) Stream 
added, broadcasting: 3\nI0402 00:44:57.913075 3072 log.go:172] (0xc0009422c0) Reply frame received for 3\nI0402 00:44:57.913281 3072 log.go:172] (0xc0009422c0) (0xc000a660a0) Create stream\nI0402 00:44:57.913306 3072 log.go:172] (0xc0009422c0) (0xc000a660a0) Stream added, broadcasting: 5\nI0402 00:44:57.914492 3072 log.go:172] (0xc0009422c0) Reply frame received for 5\nI0402 00:44:57.973951 3072 log.go:172] (0xc0009422c0) Data frame received for 5\nI0402 00:44:57.973985 3072 log.go:172] (0xc000a660a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 00:44:57.974014 3072 log.go:172] (0xc0009422c0) Data frame received for 3\nI0402 00:44:57.974047 3072 log.go:172] (0xc000a66000) (3) Data frame handling\nI0402 00:44:57.974077 3072 log.go:172] (0xc000a660a0) (5) Data frame sent\nI0402 00:44:57.974126 3072 log.go:172] (0xc0009422c0) Data frame received for 5\nI0402 00:44:57.974145 3072 log.go:172] (0xc000a660a0) (5) Data frame handling\nI0402 00:44:57.974173 3072 log.go:172] (0xc000a66000) (3) Data frame sent\nI0402 00:44:57.974190 3072 log.go:172] (0xc0009422c0) Data frame received for 3\nI0402 00:44:57.974202 3072 log.go:172] (0xc000a66000) (3) Data frame handling\nI0402 00:44:57.975102 3072 log.go:172] (0xc0009422c0) Data frame received for 1\nI0402 00:44:57.975123 3072 log.go:172] (0xc0006cf540) (1) Data frame handling\nI0402 00:44:57.975135 3072 log.go:172] (0xc0006cf540) (1) Data frame sent\nI0402 00:44:57.975152 3072 log.go:172] (0xc0009422c0) (0xc0006cf540) Stream removed, broadcasting: 1\nI0402 00:44:57.975174 3072 log.go:172] (0xc0009422c0) Go away received\nI0402 00:44:57.975509 3072 log.go:172] (0xc0009422c0) (0xc0006cf540) Stream removed, broadcasting: 1\nI0402 00:44:57.975527 3072 log.go:172] (0xc0009422c0) (0xc000a66000) Stream removed, broadcasting: 3\nI0402 00:44:57.975536 3072 log.go:172] (0xc0009422c0) (0xc000a660a0) Stream removed, broadcasting: 5\n" Apr 2 00:44:57.979: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Apr 2 00:44:57.979: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 00:45:18.000: INFO: Waiting for StatefulSet statefulset-2656/ss2 to complete update Apr 2 00:45:18.000: INFO: Waiting for Pod statefulset-2656/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 2 00:45:28.010: INFO: Deleting all statefulset in ns statefulset-2656 Apr 2 00:45:28.012: INFO: Scaling statefulset ss2 to 0 Apr 2 00:45:38.049: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 00:45:38.052: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:45:38.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2656" for this suite. 
• [SLOW TEST:121.395 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":240,"skipped":4073,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:45:38.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-6gbt STEP: Creating a pod to test atomic-volume-subpath Apr 2 00:45:38.230: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6gbt" in namespace "subpath-71" to be "Succeeded or Failed" Apr 2 00:45:38.237: INFO: Pod 
"pod-subpath-test-projected-6gbt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.593313ms Apr 2 00:45:40.255: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024572705s Apr 2 00:45:42.258: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Running", Reason="", readiness=true. Elapsed: 4.028303272s Apr 2 00:45:44.262: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Running", Reason="", readiness=true. Elapsed: 6.032018928s Apr 2 00:45:46.266: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Running", Reason="", readiness=true. Elapsed: 8.036240676s Apr 2 00:45:48.270: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Running", Reason="", readiness=true. Elapsed: 10.03993771s Apr 2 00:45:50.274: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Running", Reason="", readiness=true. Elapsed: 12.044001055s Apr 2 00:45:52.278: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Running", Reason="", readiness=true. Elapsed: 14.048478897s Apr 2 00:45:54.282: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Running", Reason="", readiness=true. Elapsed: 16.052487524s Apr 2 00:45:56.286: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Running", Reason="", readiness=true. Elapsed: 18.056027902s Apr 2 00:45:58.290: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Running", Reason="", readiness=true. Elapsed: 20.060484272s Apr 2 00:46:00.294: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Running", Reason="", readiness=true. Elapsed: 22.064508545s Apr 2 00:46:02.299: INFO: Pod "pod-subpath-test-projected-6gbt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.068869503s STEP: Saw pod success Apr 2 00:46:02.299: INFO: Pod "pod-subpath-test-projected-6gbt" satisfied condition "Succeeded or Failed" Apr 2 00:46:02.302: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-6gbt container test-container-subpath-projected-6gbt: STEP: delete the pod Apr 2 00:46:02.350: INFO: Waiting for pod pod-subpath-test-projected-6gbt to disappear Apr 2 00:46:02.368: INFO: Pod pod-subpath-test-projected-6gbt no longer exists STEP: Deleting pod pod-subpath-test-projected-6gbt Apr 2 00:46:02.368: INFO: Deleting pod "pod-subpath-test-projected-6gbt" in namespace "subpath-71" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:46:02.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-71" for this suite. • [SLOW TEST:24.266 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":241,"skipped":4091,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:46:02.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 2 00:46:03.076: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 2 00:46:05.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385163, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385163, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385163, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385163, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 2 00:46:08.166: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail 
closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:46:08.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6251" for this suite. STEP: Destroying namespace "webhook-6251-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.986 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":242,"skipped":4103,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:46:08.364: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-e9c3e222-8b1e-464f-91c5-f8070638c2eb Apr 2 00:46:08.422: INFO: Pod name my-hostname-basic-e9c3e222-8b1e-464f-91c5-f8070638c2eb: Found 0 pods out of 1 Apr 2 00:46:13.426: INFO: Pod name my-hostname-basic-e9c3e222-8b1e-464f-91c5-f8070638c2eb: Found 1 pods out of 1 Apr 2 00:46:13.426: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e9c3e222-8b1e-464f-91c5-f8070638c2eb" are running Apr 2 00:46:13.430: INFO: Pod "my-hostname-basic-e9c3e222-8b1e-464f-91c5-f8070638c2eb-w28w7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 00:46:08 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 00:46:11 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 00:46:11 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-02 00:46:08 +0000 UTC Reason: Message:}]) Apr 2 00:46:13.430: INFO: Trying to dial the pod Apr 2 00:46:18.442: INFO: Controller my-hostname-basic-e9c3e222-8b1e-464f-91c5-f8070638c2eb: Got expected result from replica 1 [my-hostname-basic-e9c3e222-8b1e-464f-91c5-f8070638c2eb-w28w7]: "my-hostname-basic-e9c3e222-8b1e-464f-91c5-f8070638c2eb-w28w7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:46:18.443: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6024" for this suite. • [SLOW TEST:10.088 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":243,"skipped":4123,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:46:18.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 2 00:46:18.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-545cad38-991e-42a3-aa3b-21028ae0d554" in namespace "projected-5856" to be "Succeeded or Failed" Apr 2 00:46:18.548: INFO: Pod "downwardapi-volume-545cad38-991e-42a3-aa3b-21028ae0d554": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.239015ms Apr 2 00:46:20.553: INFO: Pod "downwardapi-volume-545cad38-991e-42a3-aa3b-21028ae0d554": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008009856s Apr 2 00:46:22.557: INFO: Pod "downwardapi-volume-545cad38-991e-42a3-aa3b-21028ae0d554": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011742192s STEP: Saw pod success Apr 2 00:46:22.557: INFO: Pod "downwardapi-volume-545cad38-991e-42a3-aa3b-21028ae0d554" satisfied condition "Succeeded or Failed" Apr 2 00:46:22.560: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-545cad38-991e-42a3-aa3b-21028ae0d554 container client-container: STEP: delete the pod Apr 2 00:46:22.609: INFO: Waiting for pod downwardapi-volume-545cad38-991e-42a3-aa3b-21028ae0d554 to disappear Apr 2 00:46:22.641: INFO: Pod downwardapi-volume-545cad38-991e-42a3-aa3b-21028ae0d554 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:46:22.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5856" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4144,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:46:22.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-358 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-358 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-358 Apr 2 00:46:22.716: INFO: Found 0 stateful pods, waiting for 1 Apr 2 00:46:32.721: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 2 00:46:32.725: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-358 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 00:46:33.007: INFO: stderr: "I0402 00:46:32.861775 3095 log.go:172] (0xc0005424d0) (0xc000663400) Create stream\nI0402 00:46:32.861840 3095 log.go:172] (0xc0005424d0) (0xc000663400) Stream added, broadcasting: 1\nI0402 00:46:32.864945 3095 log.go:172] (0xc0005424d0) Reply frame received for 1\nI0402 00:46:32.864973 3095 log.go:172] (0xc0005424d0) (0xc0006aa000) Create stream\nI0402 00:46:32.864980 3095 log.go:172] (0xc0005424d0) (0xc0006aa000) Stream added, broadcasting: 3\nI0402 00:46:32.865936 3095 log.go:172] (0xc0005424d0) Reply frame received for 3\nI0402 00:46:32.865967 3095 log.go:172] (0xc0005424d0) (0xc0004c6000) Create stream\nI0402 00:46:32.865981 3095 log.go:172] (0xc0005424d0) (0xc0004c6000) Stream added, broadcasting: 5\nI0402 00:46:32.867138 3095 log.go:172] (0xc0005424d0) Reply frame received for 5\nI0402 00:46:32.957615 3095 log.go:172] (0xc0005424d0) Data frame received for 5\nI0402 00:46:32.957646 3095 log.go:172] (0xc0004c6000) (5) Data frame handling\nI0402 00:46:32.957667 3095 log.go:172] (0xc0004c6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 00:46:33.000469 3095 log.go:172] (0xc0005424d0) Data frame received for 3\nI0402 00:46:33.000498 3095 log.go:172] (0xc0006aa000) (3) Data frame handling\nI0402 00:46:33.000524 3095 log.go:172] (0xc0006aa000) (3) Data frame sent\nI0402 00:46:33.000532 3095 log.go:172] (0xc0005424d0) Data frame received for 3\nI0402 00:46:33.000538 3095 log.go:172] (0xc0006aa000) (3) Data frame handling\nI0402 00:46:33.000584 3095 log.go:172] (0xc0005424d0) Data frame received for 5\nI0402 00:46:33.000620 3095 log.go:172] (0xc0004c6000) (5) Data frame handling\nI0402 00:46:33.002614 3095 log.go:172] (0xc0005424d0) Data frame received for 1\nI0402 00:46:33.002632 3095 log.go:172] (0xc000663400) (1) Data 
frame handling\nI0402 00:46:33.002645 3095 log.go:172] (0xc000663400) (1) Data frame sent\nI0402 00:46:33.002668 3095 log.go:172] (0xc0005424d0) (0xc000663400) Stream removed, broadcasting: 1\nI0402 00:46:33.002939 3095 log.go:172] (0xc0005424d0) (0xc000663400) Stream removed, broadcasting: 1\nI0402 00:46:33.002981 3095 log.go:172] (0xc0005424d0) Go away received\nI0402 00:46:33.003022 3095 log.go:172] (0xc0005424d0) (0xc0006aa000) Stream removed, broadcasting: 3\nI0402 00:46:33.003047 3095 log.go:172] (0xc0005424d0) (0xc0004c6000) Stream removed, broadcasting: 5\n" Apr 2 00:46:33.007: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 00:46:33.007: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 00:46:33.019: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 2 00:46:43.023: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 2 00:46:43.023: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 00:46:43.040: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999433s Apr 2 00:46:44.044: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990850168s Apr 2 00:46:45.050: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.986214183s Apr 2 00:46:46.054: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981221798s Apr 2 00:46:47.059: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977096775s Apr 2 00:46:48.063: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972266602s Apr 2 00:46:49.068: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.967774709s Apr 2 00:46:50.072: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.963058909s Apr 2 00:46:51.077: INFO: Verifying statefulset ss doesn't scale past 1 for another 
1.958634874s Apr 2 00:46:52.082: INFO: Verifying statefulset ss doesn't scale past 1 for another 953.731587ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-358 Apr 2 00:46:53.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-358 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 00:46:53.326: INFO: stderr: "I0402 00:46:53.221467 3118 log.go:172] (0xc000af4000) (0xc0009c4000) Create stream\nI0402 00:46:53.221544 3118 log.go:172] (0xc000af4000) (0xc0009c4000) Stream added, broadcasting: 1\nI0402 00:46:53.224141 3118 log.go:172] (0xc000af4000) Reply frame received for 1\nI0402 00:46:53.224176 3118 log.go:172] (0xc000af4000) (0xc0009c40a0) Create stream\nI0402 00:46:53.224185 3118 log.go:172] (0xc000af4000) (0xc0009c40a0) Stream added, broadcasting: 3\nI0402 00:46:53.225251 3118 log.go:172] (0xc000af4000) Reply frame received for 3\nI0402 00:46:53.225288 3118 log.go:172] (0xc000af4000) (0xc0009c4140) Create stream\nI0402 00:46:53.225305 3118 log.go:172] (0xc000af4000) (0xc0009c4140) Stream added, broadcasting: 5\nI0402 00:46:53.226118 3118 log.go:172] (0xc000af4000) Reply frame received for 5\nI0402 00:46:53.320040 3118 log.go:172] (0xc000af4000) Data frame received for 5\nI0402 00:46:53.320076 3118 log.go:172] (0xc0009c4140) (5) Data frame handling\nI0402 00:46:53.320090 3118 log.go:172] (0xc0009c4140) (5) Data frame sent\nI0402 00:46:53.320097 3118 log.go:172] (0xc000af4000) Data frame received for 5\nI0402 00:46:53.320102 3118 log.go:172] (0xc0009c4140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 00:46:53.320120 3118 log.go:172] (0xc000af4000) Data frame received for 3\nI0402 00:46:53.320128 3118 log.go:172] (0xc0009c40a0) (3) Data frame handling\nI0402 00:46:53.320147 3118 log.go:172] (0xc0009c40a0) (3) Data frame sent\nI0402 
00:46:53.320156 3118 log.go:172] (0xc000af4000) Data frame received for 3\nI0402 00:46:53.320164 3118 log.go:172] (0xc0009c40a0) (3) Data frame handling\nI0402 00:46:53.321626 3118 log.go:172] (0xc000af4000) Data frame received for 1\nI0402 00:46:53.321648 3118 log.go:172] (0xc0009c4000) (1) Data frame handling\nI0402 00:46:53.321668 3118 log.go:172] (0xc0009c4000) (1) Data frame sent\nI0402 00:46:53.321785 3118 log.go:172] (0xc000af4000) (0xc0009c4000) Stream removed, broadcasting: 1\nI0402 00:46:53.321896 3118 log.go:172] (0xc000af4000) Go away received\nI0402 00:46:53.322064 3118 log.go:172] (0xc000af4000) (0xc0009c4000) Stream removed, broadcasting: 1\nI0402 00:46:53.322078 3118 log.go:172] (0xc000af4000) (0xc0009c40a0) Stream removed, broadcasting: 3\nI0402 00:46:53.322085 3118 log.go:172] (0xc000af4000) (0xc0009c4140) Stream removed, broadcasting: 5\n" Apr 2 00:46:53.326: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 00:46:53.326: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 00:46:53.329: INFO: Found 1 stateful pods, waiting for 3 Apr 2 00:47:03.334: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 2 00:47:03.335: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 2 00:47:03.335: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 2 00:47:03.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-358 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 00:47:03.564: INFO: stderr: "I0402 00:47:03.459741 3141 log.go:172] (0xc000ab13f0) (0xc000a705a0) Create stream\nI0402 
00:47:03.459833 3141 log.go:172] (0xc000ab13f0) (0xc000a705a0) Stream added, broadcasting: 1\nI0402 00:47:03.464371 3141 log.go:172] (0xc000ab13f0) Reply frame received for 1\nI0402 00:47:03.464418 3141 log.go:172] (0xc000ab13f0) (0xc00069f680) Create stream\nI0402 00:47:03.464433 3141 log.go:172] (0xc000ab13f0) (0xc00069f680) Stream added, broadcasting: 3\nI0402 00:47:03.465305 3141 log.go:172] (0xc000ab13f0) Reply frame received for 3\nI0402 00:47:03.465342 3141 log.go:172] (0xc000ab13f0) (0xc000536aa0) Create stream\nI0402 00:47:03.465351 3141 log.go:172] (0xc000ab13f0) (0xc000536aa0) Stream added, broadcasting: 5\nI0402 00:47:03.466027 3141 log.go:172] (0xc000ab13f0) Reply frame received for 5\nI0402 00:47:03.557399 3141 log.go:172] (0xc000ab13f0) Data frame received for 3\nI0402 00:47:03.557449 3141 log.go:172] (0xc000ab13f0) Data frame received for 5\nI0402 00:47:03.557470 3141 log.go:172] (0xc000536aa0) (5) Data frame handling\nI0402 00:47:03.557486 3141 log.go:172] (0xc000536aa0) (5) Data frame sent\nI0402 00:47:03.557494 3141 log.go:172] (0xc000ab13f0) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 00:47:03.557517 3141 log.go:172] (0xc00069f680) (3) Data frame handling\nI0402 00:47:03.557560 3141 log.go:172] (0xc00069f680) (3) Data frame sent\nI0402 00:47:03.557585 3141 log.go:172] (0xc000536aa0) (5) Data frame handling\nI0402 00:47:03.557647 3141 log.go:172] (0xc000ab13f0) Data frame received for 3\nI0402 00:47:03.557671 3141 log.go:172] (0xc00069f680) (3) Data frame handling\nI0402 00:47:03.559156 3141 log.go:172] (0xc000ab13f0) Data frame received for 1\nI0402 00:47:03.559181 3141 log.go:172] (0xc000a705a0) (1) Data frame handling\nI0402 00:47:03.559195 3141 log.go:172] (0xc000a705a0) (1) Data frame sent\nI0402 00:47:03.559238 3141 log.go:172] (0xc000ab13f0) (0xc000a705a0) Stream removed, broadcasting: 1\nI0402 00:47:03.559271 3141 log.go:172] (0xc000ab13f0) Go away received\nI0402 00:47:03.559756 3141 log.go:172] 
(0xc000ab13f0) (0xc000a705a0) Stream removed, broadcasting: 1\nI0402 00:47:03.559781 3141 log.go:172] (0xc000ab13f0) (0xc00069f680) Stream removed, broadcasting: 3\nI0402 00:47:03.559793 3141 log.go:172] (0xc000ab13f0) (0xc000536aa0) Stream removed, broadcasting: 5\n" Apr 2 00:47:03.564: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 00:47:03.564: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 00:47:03.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-358 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 00:47:03.808: INFO: stderr: "I0402 00:47:03.689039 3161 log.go:172] (0xc000874000) (0xc0006d5180) Create stream\nI0402 00:47:03.689238 3161 log.go:172] (0xc000874000) (0xc0006d5180) Stream added, broadcasting: 1\nI0402 00:47:03.692388 3161 log.go:172] (0xc000874000) Reply frame received for 1\nI0402 00:47:03.692438 3161 log.go:172] (0xc000874000) (0xc0006d5220) Create stream\nI0402 00:47:03.692454 3161 log.go:172] (0xc000874000) (0xc0006d5220) Stream added, broadcasting: 3\nI0402 00:47:03.693593 3161 log.go:172] (0xc000874000) Reply frame received for 3\nI0402 00:47:03.693627 3161 log.go:172] (0xc000874000) (0xc0006d52c0) Create stream\nI0402 00:47:03.693643 3161 log.go:172] (0xc000874000) (0xc0006d52c0) Stream added, broadcasting: 5\nI0402 00:47:03.694586 3161 log.go:172] (0xc000874000) Reply frame received for 5\nI0402 00:47:03.775816 3161 log.go:172] (0xc000874000) Data frame received for 5\nI0402 00:47:03.775847 3161 log.go:172] (0xc0006d52c0) (5) Data frame handling\nI0402 00:47:03.775862 3161 log.go:172] (0xc0006d52c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 00:47:03.799786 3161 log.go:172] (0xc000874000) Data frame received for 3\nI0402 00:47:03.799798 3161 
log.go:172] (0xc0006d5220) (3) Data frame handling\nI0402 00:47:03.799812 3161 log.go:172] (0xc0006d5220) (3) Data frame sent\nI0402 00:47:03.800306 3161 log.go:172] (0xc000874000) Data frame received for 5\nI0402 00:47:03.800337 3161 log.go:172] (0xc0006d52c0) (5) Data frame handling\nI0402 00:47:03.800472 3161 log.go:172] (0xc000874000) Data frame received for 3\nI0402 00:47:03.800483 3161 log.go:172] (0xc0006d5220) (3) Data frame handling\nI0402 00:47:03.802724 3161 log.go:172] (0xc000874000) Data frame received for 1\nI0402 00:47:03.802759 3161 log.go:172] (0xc0006d5180) (1) Data frame handling\nI0402 00:47:03.802795 3161 log.go:172] (0xc0006d5180) (1) Data frame sent\nI0402 00:47:03.802829 3161 log.go:172] (0xc000874000) (0xc0006d5180) Stream removed, broadcasting: 1\nI0402 00:47:03.802997 3161 log.go:172] (0xc000874000) Go away received\nI0402 00:47:03.803225 3161 log.go:172] (0xc000874000) (0xc0006d5180) Stream removed, broadcasting: 1\nI0402 00:47:03.803241 3161 log.go:172] (0xc000874000) (0xc0006d5220) Stream removed, broadcasting: 3\nI0402 00:47:03.803249 3161 log.go:172] (0xc000874000) (0xc0006d52c0) Stream removed, broadcasting: 5\n" Apr 2 00:47:03.808: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 00:47:03.808: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 00:47:03.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-358 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 2 00:47:04.033: INFO: stderr: "I0402 00:47:03.930583 3178 log.go:172] (0xc0009a2f20) (0xc0009ec500) Create stream\nI0402 00:47:03.930640 3178 log.go:172] (0xc0009a2f20) (0xc0009ec500) Stream added, broadcasting: 1\nI0402 00:47:03.935924 3178 log.go:172] (0xc0009a2f20) Reply frame received for 1\nI0402 00:47:03.935976 3178 
log.go:172] (0xc0009a2f20) (0xc00057d860) Create stream\nI0402 00:47:03.935999 3178 log.go:172] (0xc0009a2f20) (0xc00057d860) Stream added, broadcasting: 3\nI0402 00:47:03.937297 3178 log.go:172] (0xc0009a2f20) Reply frame received for 3\nI0402 00:47:03.937421 3178 log.go:172] (0xc0009a2f20) (0xc000678c80) Create stream\nI0402 00:47:03.937477 3178 log.go:172] (0xc0009a2f20) (0xc000678c80) Stream added, broadcasting: 5\nI0402 00:47:03.938439 3178 log.go:172] (0xc0009a2f20) Reply frame received for 5\nI0402 00:47:04.001292 3178 log.go:172] (0xc0009a2f20) Data frame received for 5\nI0402 00:47:04.001325 3178 log.go:172] (0xc000678c80) (5) Data frame handling\nI0402 00:47:04.001347 3178 log.go:172] (0xc000678c80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0402 00:47:04.025284 3178 log.go:172] (0xc0009a2f20) Data frame received for 3\nI0402 00:47:04.025300 3178 log.go:172] (0xc00057d860) (3) Data frame handling\nI0402 00:47:04.025307 3178 log.go:172] (0xc00057d860) (3) Data frame sent\nI0402 00:47:04.025819 3178 log.go:172] (0xc0009a2f20) Data frame received for 5\nI0402 00:47:04.025837 3178 log.go:172] (0xc000678c80) (5) Data frame handling\nI0402 00:47:04.025893 3178 log.go:172] (0xc0009a2f20) Data frame received for 3\nI0402 00:47:04.025906 3178 log.go:172] (0xc00057d860) (3) Data frame handling\nI0402 00:47:04.028409 3178 log.go:172] (0xc0009a2f20) Data frame received for 1\nI0402 00:47:04.028452 3178 log.go:172] (0xc0009ec500) (1) Data frame handling\nI0402 00:47:04.028493 3178 log.go:172] (0xc0009ec500) (1) Data frame sent\nI0402 00:47:04.028538 3178 log.go:172] (0xc0009a2f20) (0xc0009ec500) Stream removed, broadcasting: 1\nI0402 00:47:04.028560 3178 log.go:172] (0xc0009a2f20) Go away received\nI0402 00:47:04.028850 3178 log.go:172] (0xc0009a2f20) (0xc0009ec500) Stream removed, broadcasting: 1\nI0402 00:47:04.028867 3178 log.go:172] (0xc0009a2f20) (0xc00057d860) Stream removed, broadcasting: 3\nI0402 00:47:04.028879 3178 log.go:172] 
(0xc0009a2f20) (0xc000678c80) Stream removed, broadcasting: 5\n" Apr 2 00:47:04.033: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 2 00:47:04.033: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 2 00:47:04.033: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 00:47:04.037: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 2 00:47:14.059: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 2 00:47:14.059: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 2 00:47:14.059: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 2 00:47:14.070: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999306s Apr 2 00:47:15.076: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99507319s Apr 2 00:47:16.080: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989787235s Apr 2 00:47:17.086: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984864822s Apr 2 00:47:18.104: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979807613s Apr 2 00:47:19.109: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.961444492s Apr 2 00:47:20.113: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.9562726s Apr 2 00:47:21.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.951957964s Apr 2 00:47:22.134: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.947304834s Apr 2 00:47:23.139: INFO: Verifying statefulset ss doesn't scale past 3 for another 931.500016ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-358 Apr 2 00:47:24.143: INFO: Running '/usr/local/bin/kubectl
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-358 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 00:47:24.378: INFO: stderr: "I0402 00:47:24.275708 3198 log.go:172] (0xc00003a790) (0xc000534000) Create stream\nI0402 00:47:24.275757 3198 log.go:172] (0xc00003a790) (0xc000534000) Stream added, broadcasting: 1\nI0402 00:47:24.278556 3198 log.go:172] (0xc00003a790) Reply frame received for 1\nI0402 00:47:24.278620 3198 log.go:172] (0xc00003a790) (0xc000534140) Create stream\nI0402 00:47:24.278639 3198 log.go:172] (0xc00003a790) (0xc000534140) Stream added, broadcasting: 3\nI0402 00:47:24.279691 3198 log.go:172] (0xc00003a790) Reply frame received for 3\nI0402 00:47:24.279723 3198 log.go:172] (0xc00003a790) (0xc0005341e0) Create stream\nI0402 00:47:24.279730 3198 log.go:172] (0xc00003a790) (0xc0005341e0) Stream added, broadcasting: 5\nI0402 00:47:24.280735 3198 log.go:172] (0xc00003a790) Reply frame received for 5\nI0402 00:47:24.371721 3198 log.go:172] (0xc00003a790) Data frame received for 5\nI0402 00:47:24.371780 3198 log.go:172] (0xc0005341e0) (5) Data frame handling\nI0402 00:47:24.371804 3198 log.go:172] (0xc0005341e0) (5) Data frame sent\nI0402 00:47:24.371821 3198 log.go:172] (0xc00003a790) Data frame received for 5\nI0402 00:47:24.371837 3198 log.go:172] (0xc0005341e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 00:47:24.371876 3198 log.go:172] (0xc00003a790) Data frame received for 3\nI0402 00:47:24.371895 3198 log.go:172] (0xc000534140) (3) Data frame handling\nI0402 00:47:24.371930 3198 log.go:172] (0xc000534140) (3) Data frame sent\nI0402 00:47:24.371954 3198 log.go:172] (0xc00003a790) Data frame received for 3\nI0402 00:47:24.371971 3198 log.go:172] (0xc000534140) (3) Data frame handling\nI0402 00:47:24.373061 3198 log.go:172] (0xc00003a790) Data frame received for 1\nI0402 00:47:24.373088 3198 log.go:172] (0xc000534000) (1) Data 
frame handling\nI0402 00:47:24.373103 3198 log.go:172] (0xc000534000) (1) Data frame sent\nI0402 00:47:24.373268 3198 log.go:172] (0xc00003a790) (0xc000534000) Stream removed, broadcasting: 1\nI0402 00:47:24.373313 3198 log.go:172] (0xc00003a790) Go away received\nI0402 00:47:24.373675 3198 log.go:172] (0xc00003a790) (0xc000534000) Stream removed, broadcasting: 1\nI0402 00:47:24.373695 3198 log.go:172] (0xc00003a790) (0xc000534140) Stream removed, broadcasting: 3\nI0402 00:47:24.373706 3198 log.go:172] (0xc00003a790) (0xc0005341e0) Stream removed, broadcasting: 5\n" Apr 2 00:47:24.378: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 00:47:24.378: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 00:47:24.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-358 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 00:47:24.616: INFO: stderr: "I0402 00:47:24.552174 3220 log.go:172] (0xc000742b00) (0xc00073c280) Create stream\nI0402 00:47:24.552223 3220 log.go:172] (0xc000742b00) (0xc00073c280) Stream added, broadcasting: 1\nI0402 00:47:24.555591 3220 log.go:172] (0xc000742b00) Reply frame received for 1\nI0402 00:47:24.555650 3220 log.go:172] (0xc000742b00) (0xc00073c320) Create stream\nI0402 00:47:24.555665 3220 log.go:172] (0xc000742b00) (0xc00073c320) Stream added, broadcasting: 3\nI0402 00:47:24.556761 3220 log.go:172] (0xc000742b00) Reply frame received for 3\nI0402 00:47:24.556832 3220 log.go:172] (0xc000742b00) (0xc000628000) Create stream\nI0402 00:47:24.556864 3220 log.go:172] (0xc000742b00) (0xc000628000) Stream added, broadcasting: 5\nI0402 00:47:24.558096 3220 log.go:172] (0xc000742b00) Reply frame received for 5\nI0402 00:47:24.609212 3220 log.go:172] (0xc000742b00) Data frame received for 3\nI0402 
00:47:24.609252 3220 log.go:172] (0xc00073c320) (3) Data frame handling\nI0402 00:47:24.609290 3220 log.go:172] (0xc00073c320) (3) Data frame sent\nI0402 00:47:24.609322 3220 log.go:172] (0xc000742b00) Data frame received for 3\nI0402 00:47:24.609334 3220 log.go:172] (0xc00073c320) (3) Data frame handling\nI0402 00:47:24.609589 3220 log.go:172] (0xc000742b00) Data frame received for 5\nI0402 00:47:24.609617 3220 log.go:172] (0xc000628000) (5) Data frame handling\nI0402 00:47:24.609643 3220 log.go:172] (0xc000628000) (5) Data frame sent\nI0402 00:47:24.609660 3220 log.go:172] (0xc000742b00) Data frame received for 5\nI0402 00:47:24.609671 3220 log.go:172] (0xc000628000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 00:47:24.611078 3220 log.go:172] (0xc000742b00) Data frame received for 1\nI0402 00:47:24.611103 3220 log.go:172] (0xc00073c280) (1) Data frame handling\nI0402 00:47:24.611122 3220 log.go:172] (0xc00073c280) (1) Data frame sent\nI0402 00:47:24.611143 3220 log.go:172] (0xc000742b00) (0xc00073c280) Stream removed, broadcasting: 1\nI0402 00:47:24.611216 3220 log.go:172] (0xc000742b00) Go away received\nI0402 00:47:24.611538 3220 log.go:172] (0xc000742b00) (0xc00073c280) Stream removed, broadcasting: 1\nI0402 00:47:24.611560 3220 log.go:172] (0xc000742b00) (0xc00073c320) Stream removed, broadcasting: 3\nI0402 00:47:24.611573 3220 log.go:172] (0xc000742b00) (0xc000628000) Stream removed, broadcasting: 5\n" Apr 2 00:47:24.616: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 00:47:24.616: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 00:47:24.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-358 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 2 00:47:24.859: INFO: 
stderr: "I0402 00:47:24.748438 3243 log.go:172] (0xc00003a6e0) (0xc0006d5180) Create stream\nI0402 00:47:24.748503 3243 log.go:172] (0xc00003a6e0) (0xc0006d5180) Stream added, broadcasting: 1\nI0402 00:47:24.752241 3243 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0402 00:47:24.752294 3243 log.go:172] (0xc00003a6e0) (0xc0009ea000) Create stream\nI0402 00:47:24.752317 3243 log.go:172] (0xc00003a6e0) (0xc0009ea000) Stream added, broadcasting: 3\nI0402 00:47:24.753511 3243 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0402 00:47:24.753566 3243 log.go:172] (0xc00003a6e0) (0xc0006d5360) Create stream\nI0402 00:47:24.753580 3243 log.go:172] (0xc00003a6e0) (0xc0006d5360) Stream added, broadcasting: 5\nI0402 00:47:24.756018 3243 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0402 00:47:24.852795 3243 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0402 00:47:24.852845 3243 log.go:172] (0xc0009ea000) (3) Data frame handling\nI0402 00:47:24.852862 3243 log.go:172] (0xc0009ea000) (3) Data frame sent\nI0402 00:47:24.852872 3243 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0402 00:47:24.852882 3243 log.go:172] (0xc0009ea000) (3) Data frame handling\nI0402 00:47:24.853769 3243 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0402 00:47:24.853800 3243 log.go:172] (0xc0006d5360) (5) Data frame handling\nI0402 00:47:24.853813 3243 log.go:172] (0xc0006d5360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0402 00:47:24.853894 3243 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0402 00:47:24.853924 3243 log.go:172] (0xc0006d5360) (5) Data frame handling\nI0402 00:47:24.855616 3243 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0402 00:47:24.855640 3243 log.go:172] (0xc0006d5180) (1) Data frame handling\nI0402 00:47:24.855651 3243 log.go:172] (0xc0006d5180) (1) Data frame sent\nI0402 00:47:24.855730 3243 log.go:172] (0xc00003a6e0) (0xc0006d5180) Stream removed, broadcasting: 1\nI0402 
00:47:24.855753 3243 log.go:172] (0xc00003a6e0) Go away received\nI0402 00:47:24.856180 3243 log.go:172] (0xc00003a6e0) (0xc0006d5180) Stream removed, broadcasting: 1\nI0402 00:47:24.856213 3243 log.go:172] (0xc00003a6e0) (0xc0009ea000) Stream removed, broadcasting: 3\nI0402 00:47:24.856235 3243 log.go:172] (0xc00003a6e0) (0xc0006d5360) Stream removed, broadcasting: 5\n" Apr 2 00:47:24.860: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 2 00:47:24.860: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 2 00:47:24.860: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 2 00:47:44.875: INFO: Deleting all statefulset in ns statefulset-358 Apr 2 00:47:44.878: INFO: Scaling statefulset ss to 0 Apr 2 00:47:44.886: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 00:47:44.889: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:47:44.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-358" for this suite. 
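The "scaled down in reverse order" verification above relies on the StatefulSet's ordered pod management: with the default OrderedReady policy, pods are removed highest-ordinal-first (ss-2, then ss-1, then ss-0). A minimal sketch of a manifest that would exhibit this behavior — the names, labels, and image are illustrative assumptions, not taken from this log:

```yaml
# Hypothetical StatefulSet illustrating ordered (reverse-order) scale-down.
# With podManagementPolicy: OrderedReady (the default), scaling to 0 deletes
# pods highest-ordinal-first, which is what the test's verification checks.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test          # headless service, as created in the test setup
  replicas: 3
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4     # assumption; the log shows an Apache htdocs path
```

Scaling such a set to 0 (as the log does with "Scaling statefulset ss to 0") would then remove ss-2, ss-1, and ss-0 in that order, and each removal waits for the previous pod to terminate.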
• [SLOW TEST:82.287 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":245,"skipped":4158,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:47:44.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 2 00:47:45.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af6b539e-53ce-4f8d-ad5e-c600172d56fc" in namespace "projected-9841" to be "Succeeded or Failed" Apr 2 00:47:45.021: INFO: Pod 
"downwardapi-volume-af6b539e-53ce-4f8d-ad5e-c600172d56fc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.021432ms Apr 2 00:47:47.032: INFO: Pod "downwardapi-volume-af6b539e-53ce-4f8d-ad5e-c600172d56fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024863913s Apr 2 00:47:49.036: INFO: Pod "downwardapi-volume-af6b539e-53ce-4f8d-ad5e-c600172d56fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029574618s STEP: Saw pod success Apr 2 00:47:49.037: INFO: Pod "downwardapi-volume-af6b539e-53ce-4f8d-ad5e-c600172d56fc" satisfied condition "Succeeded or Failed" Apr 2 00:47:49.039: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-af6b539e-53ce-4f8d-ad5e-c600172d56fc container client-container: STEP: delete the pod Apr 2 00:47:49.072: INFO: Waiting for pod downwardapi-volume-af6b539e-53ce-4f8d-ad5e-c600172d56fc to disappear Apr 2 00:47:49.083: INFO: Pod downwardapi-volume-af6b539e-53ce-4f8d-ad5e-c600172d56fc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:47:49.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9841" for this suite. 
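The projected downward API test above creates a pod whose volume exposes the container's memory limit as a file. A minimal sketch of such a pod spec — the pod name, image, and mount path are illustrative assumptions, not taken from this log:

```yaml
# Hypothetical pod: a projected downwardAPI volume writes the container's
# memory limit to /etc/podinfo/mem_limit via resourceFieldRef.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox            # assumption
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

The test then reads the container's logs (as the log shows with "Trying to get logs from node ... container client-container") and checks that the file content matches the declared limit.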
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:47:49.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-8877 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8877 STEP: Creating statefulset with conflicting port in namespace statefulset-8877 STEP: Waiting until pod test-pod will start running in namespace statefulset-8877 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8877 Apr 2 00:47:53.221: INFO: Observed stateful pod in namespace: statefulset-8877, name: ss-0, uid: 2f7a1d1b-a02a-40dc-b782-44fc347ee242, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 2 00:47:53.223: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8877 STEP: Removing pod with conflicting port in namespace statefulset-8877 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8877 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 2 00:47:57.279: INFO: Deleting all statefulset in ns statefulset-8877 Apr 2 00:47:57.282: INFO: Scaling statefulset ss to 0 Apr 2 00:48:07.297: INFO: Waiting for statefulset status.replicas updated to 0 Apr 2 00:48:07.300: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:48:07.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8877" for this suite. • [SLOW TEST:18.229 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":247,"skipped":4206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] 
ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:48:07.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-808f0349-b884-4eed-bd56-6c2a30cc2c6a STEP: Creating a pod to test consume configMaps Apr 2 00:48:07.390: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c153cb8-c20f-4230-8804-e5dc99a1c4d9" in namespace "configmap-7877" to be "Succeeded or Failed" Apr 2 00:48:07.394: INFO: Pod "pod-configmaps-0c153cb8-c20f-4230-8804-e5dc99a1c4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.962875ms Apr 2 00:48:09.399: INFO: Pod "pod-configmaps-0c153cb8-c20f-4230-8804-e5dc99a1c4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008137338s Apr 2 00:48:11.403: INFO: Pod "pod-configmaps-0c153cb8-c20f-4230-8804-e5dc99a1c4d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012279859s STEP: Saw pod success Apr 2 00:48:11.403: INFO: Pod "pod-configmaps-0c153cb8-c20f-4230-8804-e5dc99a1c4d9" satisfied condition "Succeeded or Failed" Apr 2 00:48:11.407: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0c153cb8-c20f-4230-8804-e5dc99a1c4d9 container configmap-volume-test: STEP: delete the pod Apr 2 00:48:11.450: INFO: Waiting for pod pod-configmaps-0c153cb8-c20f-4230-8804-e5dc99a1c4d9 to disappear Apr 2 00:48:11.461: INFO: Pod pod-configmaps-0c153cb8-c20f-4230-8804-e5dc99a1c4d9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:48:11.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7877" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4235,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:48:11.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
creating the pod Apr 2 00:48:11.537: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:48:18.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4050" for this suite. • [SLOW TEST:6.684 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":249,"skipped":4238,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:48:18.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 2 00:48:18.235: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the 
same group but different versions (two CRDs) show up in OpenAPI documentation Apr 2 00:48:28.628: INFO: >>> kubeConfig: /root/.kube/config Apr 2 00:48:31.543: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:48:42.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4190" for this suite. • [SLOW TEST:23.942 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":250,"skipped":4241,"failed":0} [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:48:42.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-7407.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7407.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 00:48:48.259: INFO: DNS probes using dns-test-dbefe41e-3ba2-45ef-b487-d4735e57568e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7407.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7407.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 00:48:54.386: INFO: File wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local from pod dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 00:48:54.390: INFO: File jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local from pod dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 2 00:48:54.390: INFO: Lookups using dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 failed for: [wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local] Apr 2 00:48:59.395: INFO: File wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local from pod dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 00:48:59.399: INFO: File jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local from pod dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 00:48:59.399: INFO: Lookups using dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 failed for: [wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local] Apr 2 00:49:04.395: INFO: File wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local from pod dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 00:49:04.399: INFO: File jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local from pod dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 00:49:04.399: INFO: Lookups using dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 failed for: [wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local] Apr 2 00:49:09.395: INFO: File wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local from pod dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 00:49:09.399: INFO: File jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local from pod dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 2 00:49:09.399: INFO: Lookups using dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 failed for: [wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local] Apr 2 00:49:14.394: INFO: File wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local from pod dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 00:49:14.398: INFO: File jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local from pod dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 2 00:49:14.398: INFO: Lookups using dns-7407/dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 failed for: [wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local] Apr 2 00:49:19.399: INFO: DNS probes using dns-test-b0f0c0b6-f6dd-4797-b679-57ea338546f3 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7407.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7407.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7407.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7407.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 00:49:25.917: INFO: DNS probes using dns-test-f1cdee99-e474-4e0c-9fac-6c3c3addfadd succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:49:25.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "dns-7407" for this suite. • [SLOW TEST:43.908 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":251,"skipped":4241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:49:26.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 2 00:49:26.047: INFO: >>> kubeConfig: /root/.kube/config Apr 2 00:49:27.951: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:49:39.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-646" for this suite. 
• [SLOW TEST:13.405 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":252,"skipped":4268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:49:39.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:49:52.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3356" for this suite. • [SLOW TEST:13.159 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":275,"completed":253,"skipped":4292,"failed":0} SSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:49:52.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:49:52.650: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-72ace747-1433-469e-b35b-a8f2ecba0969" in namespace "security-context-test-9683" to be "Succeeded or Failed" Apr 2 00:49:52.662: INFO: Pod "alpine-nnp-false-72ace747-1433-469e-b35b-a8f2ecba0969": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026596ms Apr 2 00:49:54.666: INFO: Pod "alpine-nnp-false-72ace747-1433-469e-b35b-a8f2ecba0969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016028708s Apr 2 00:49:56.670: INFO: Pod "alpine-nnp-false-72ace747-1433-469e-b35b-a8f2ecba0969": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020156908s Apr 2 00:49:56.670: INFO: Pod "alpine-nnp-false-72ace747-1433-469e-b35b-a8f2ecba0969" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:49:56.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9683" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4295,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:49:56.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 2 00:49:56.758: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83c3d6c4-5211-4c34-a569-8f728a15d124" in namespace "downward-api-9590" to be "Succeeded or Failed" Apr 2 00:49:56.762: INFO: Pod "downwardapi-volume-83c3d6c4-5211-4c34-a569-8f728a15d124": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.393582ms Apr 2 00:49:58.766: INFO: Pod "downwardapi-volume-83c3d6c4-5211-4c34-a569-8f728a15d124": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008328665s Apr 2 00:50:00.769: INFO: Pod "downwardapi-volume-83c3d6c4-5211-4c34-a569-8f728a15d124": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01146378s STEP: Saw pod success Apr 2 00:50:00.769: INFO: Pod "downwardapi-volume-83c3d6c4-5211-4c34-a569-8f728a15d124" satisfied condition "Succeeded or Failed" Apr 2 00:50:00.772: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-83c3d6c4-5211-4c34-a569-8f728a15d124 container client-container: STEP: delete the pod Apr 2 00:50:00.804: INFO: Waiting for pod downwardapi-volume-83c3d6c4-5211-4c34-a569-8f728a15d124 to disappear Apr 2 00:50:00.816: INFO: Pod downwardapi-volume-83c3d6c4-5211-4c34-a569-8f728a15d124 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:50:00.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9590" for this suite. 
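The DefaultMode test above verifies the permission bits applied to files created by a downward API volume. A minimal sketch of a pod spec exercising this — the pod name, image, and mode value are illustrative assumptions, not taken from this log:

```yaml
# Hypothetical pod: defaultMode sets the permission bits (here 0400,
# i.e. read-only for the owner) on every file in the downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox            # assumption
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

A per-item `mode` field can override `defaultMode` for individual files; the conformance test checks that the default is applied when no per-item mode is set.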
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4311,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:50:00.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5294.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5294.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5294.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5294.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5294.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5294.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 2 00:50:06.940: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:06.944: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:06.947: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:06.950: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:06.960: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:06.963: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod 
dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:06.966: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:06.969: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:06.975: INFO: Lookups using dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local] Apr 2 00:50:11.980: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:11.984: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:11.988: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod 
dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:11.991: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:12.001: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:12.004: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:12.008: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:12.011: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:12.018: INFO: Lookups using dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local] Apr 2 00:50:16.987: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:16.990: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:16.993: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:16.996: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:17.004: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:17.007: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:17.010: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod 
dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:17.012: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:17.022: INFO: Lookups using dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local] Apr 2 00:50:21.980: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:21.983: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:21.987: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:21.991: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod 
dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:22.001: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:22.004: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:22.006: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:22.008: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:22.014: INFO: Lookups using dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local] Apr 2 00:50:26.980: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local 
from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:26.984: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:26.987: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:26.990: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:26.999: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:27.002: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:27.005: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:27.008: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the 
server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:27.013: INFO: Lookups using dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local] Apr 2 00:50:31.980: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:31.984: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:31.987: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:31.991: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:32.001: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod 
dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:32.004: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:32.007: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:32.010: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local from pod dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f: the server could not find the requested resource (get pods dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f) Apr 2 00:50:32.017: INFO: Lookups using dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5294.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5294.svc.cluster.local jessie_udp@dns-test-service-2.dns-5294.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5294.svc.cluster.local] Apr 2 00:50:37.012: INFO: DNS probes using dns-5294/dns-test-5e85348a-ee29-47f0-acec-9b95d345ef7f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:50:37.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-5294" for this suite. • [SLOW TEST:36.783 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":256,"skipped":4337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:50:37.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 2 00:50:37.724: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f578a2bd-ed00-4d73-a83f-56baa6376219", Controller:(*bool)(0xc005dacf7a), BlockOwnerDeletion:(*bool)(0xc005dacf7b)}} Apr 2 00:50:37.746: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"1c1dada7-144e-44ef-b535-fa52daa3156f", Controller:(*bool)(0xc005e000ca), BlockOwnerDeletion:(*bool)(0xc005e000cb)}} Apr 2 00:50:37.781: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", 
UID:"c705516c-2356-4170-92c9-c916b59fa177", Controller:(*bool)(0xc005dd656a), BlockOwnerDeletion:(*bool)(0xc005dd656b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:50:42.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9969" for this suite. • [SLOW TEST:5.215 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":257,"skipped":4398,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:50:42.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:50:46.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9824" for 
this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4401,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:50:46.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0402 00:50:57.074976 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 2 00:50:57.075: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:50:57.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3881" for this suite. 
• [SLOW TEST:10.101 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":259,"skipped":4401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:50:57.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 2 00:50:57.346: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 2 00:50:57.356: INFO: Waiting for terminating namespaces to be deleted... 
Apr 2 00:50:57.359: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 2 00:50:57.364: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 00:50:57.364: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 00:50:57.364: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 00:50:57.364: INFO: Container kube-proxy ready: true, restart count 0 Apr 2 00:50:57.364: INFO: client-containers-ee9eee36-d568-485e-9e01-999aef441d93 from containers-9824 started at 2020-04-02 00:50:42 +0000 UTC (1 container statuses recorded) Apr 2 00:50:57.364: INFO: Container test-container ready: false, restart count 0 Apr 2 00:50:57.364: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 2 00:50:57.368: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 00:50:57.368: INFO: Container kindnet-cni ready: true, restart count 0 Apr 2 00:50:57.368: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 2 00:50:57.368: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-0a6fd728-7346-4610-a631-a2312f6bd292 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-0a6fd728-7346-4610-a631-a2312f6bd292 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-0a6fd728-7346-4610-a631-a2312f6bd292 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:51:05.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-401" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.513 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":260,"skipped":4472,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:51:05.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 2 00:51:10.741: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 2 00:51:10.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6509" for this suite. • [SLOW TEST:5.328 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":261,"skipped":4507,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 2 00:51:10.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 
[BeforeEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 2 00:51:10.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9800'
Apr 2 00:51:11.062: INFO: stderr: ""
Apr 2 00:51:11.062: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Apr 2 00:51:11.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9800'
Apr 2 00:51:22.762: INFO: stderr: ""
Apr 2 00:51:22.762: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:51:22.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9800" for this suite.
• [SLOW TEST:11.850 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":262,"skipped":4521,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:51:22.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 2 00:51:22.869: INFO: Waiting up to 5m0s for pod "pod-791d2e19-92ff-4daa-9284-67c8e36deb46" in namespace "emptydir-9931" to be "Succeeded or Failed"
Apr 2 00:51:22.872: INFO: Pod "pod-791d2e19-92ff-4daa-9284-67c8e36deb46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.907471ms
Apr 2 00:51:24.876: INFO: Pod "pod-791d2e19-92ff-4daa-9284-67c8e36deb46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006876395s
Apr 2 00:51:26.880: INFO: Pod "pod-791d2e19-92ff-4daa-9284-67c8e36deb46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011001622s
STEP: Saw pod success
Apr 2 00:51:26.880: INFO: Pod "pod-791d2e19-92ff-4daa-9284-67c8e36deb46" satisfied condition "Succeeded or Failed"
Apr 2 00:51:26.903: INFO: Trying to get logs from node latest-worker2 pod pod-791d2e19-92ff-4daa-9284-67c8e36deb46 container test-container:
STEP: delete the pod
Apr 2 00:51:26.923: INFO: Waiting for pod pod-791d2e19-92ff-4daa-9284-67c8e36deb46 to disappear
Apr 2 00:51:26.928: INFO: Pod pod-791d2e19-92ff-4daa-9284-67c8e36deb46 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:51:26.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9931" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4570,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:51:26.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 2 00:51:27.542: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 2 00:51:29.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385487, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385487, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385487, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385487, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 2 00:51:32.570: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:51:32.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:51:33.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3118" for this suite.
STEP: Destroying namespace "webhook-3118-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.810 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":264,"skipped":4587,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:51:33.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 2 00:51:35.094: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 2 00:51:37.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385495, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385495, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385495, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385495, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 2 00:51:40.143: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:51:40.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4874" for this suite.
STEP: Destroying namespace "webhook-4874-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.486 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":265,"skipped":4587,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:51:40.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:51:40.307: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:51:40.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3378" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":266,"skipped":4622,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:51:41.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 2 00:51:45.229: INFO: Waiting up to 5m0s for pod "client-envvars-6f7bc551-3409-4745-a4d7-8abc981f4e8d" in namespace "pods-449" to be "Succeeded or Failed"
Apr 2 00:51:45.238: INFO: Pod "client-envvars-6f7bc551-3409-4745-a4d7-8abc981f4e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.988383ms
Apr 2 00:51:47.263: INFO: Pod "client-envvars-6f7bc551-3409-4745-a4d7-8abc981f4e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034568816s
Apr 2 00:51:49.268: INFO: Pod "client-envvars-6f7bc551-3409-4745-a4d7-8abc981f4e8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039256095s
STEP: Saw pod success
Apr 2 00:51:49.268: INFO: Pod "client-envvars-6f7bc551-3409-4745-a4d7-8abc981f4e8d" satisfied condition "Succeeded or Failed"
Apr 2 00:51:49.271: INFO: Trying to get logs from node latest-worker pod client-envvars-6f7bc551-3409-4745-a4d7-8abc981f4e8d container env3cont:
STEP: delete the pod
Apr 2 00:51:49.293: INFO: Waiting for pod client-envvars-6f7bc551-3409-4745-a4d7-8abc981f4e8d to disappear
Apr 2 00:51:49.304: INFO: Pod client-envvars-6f7bc551-3409-4745-a4d7-8abc981f4e8d no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:51:49.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-449" for this suite.
• [SLOW TEST:8.308 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4632,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:51:49.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-612
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Apr 2 00:51:49.466: INFO: Found 0 stateful pods, waiting for 3
Apr 2 00:51:59.470: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 2 00:51:59.470: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 2 00:51:59.470: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Apr 2 00:52:09.470: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 2 00:52:09.471: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 2 00:52:09.471: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Apr 2 00:52:09.500: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 2 00:52:19.554: INFO: Updating stateful set ss2
Apr 2 00:52:19.605: INFO: Waiting for Pod statefulset-612/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 2 00:52:29.613: INFO: Waiting for Pod statefulset-612/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Apr 2 00:52:39.763: INFO: Found 2 stateful pods, waiting for 3
Apr 2 00:52:49.767: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 2 00:52:49.767: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 2 00:52:49.767: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 2 00:52:49.792: INFO: Updating stateful set ss2
Apr 2 00:52:49.799: INFO: Waiting for Pod statefulset-612/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 2 00:52:59.807: INFO: Waiting for Pod statefulset-612/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 2 00:53:09.825: INFO: Updating stateful set ss2
Apr 2 00:53:09.881: INFO: Waiting for StatefulSet statefulset-612/ss2 to complete update
Apr 2 00:53:09.881: INFO: Waiting for Pod statefulset-612/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 2 00:53:19.889: INFO: Waiting for StatefulSet statefulset-612/ss2 to complete update
Apr 2 00:53:19.889: INFO: Waiting for Pod statefulset-612/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 2 00:53:29.889: INFO: Deleting all statefulset in ns statefulset-612
Apr 2 00:53:29.892: INFO: Scaling statefulset ss2 to 0
Apr 2 00:53:49.911: INFO: Waiting for statefulset status.replicas updated to 0
Apr 2 00:53:49.920: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:53:49.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-612" for this suite.
• [SLOW TEST:120.617 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":268,"skipped":4640,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:53:49.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 2 00:53:50.026: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 2 00:53:55.029: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:53:55.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2389" for this suite.
• [SLOW TEST:5.216 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":269,"skipped":4646,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:53:55.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-5a834b59-cba4-43bd-8fca-4010dfe4ac59
STEP: Creating a pod to test consume secrets
Apr 2 00:53:55.297: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d263020-7dda-43ec-8d73-b5e81e22764b" in namespace "projected-1169" to be "Succeeded or Failed"
Apr 2 00:53:55.358: INFO: Pod "pod-projected-secrets-2d263020-7dda-43ec-8d73-b5e81e22764b": Phase="Pending", Reason="", readiness=false. Elapsed: 60.388381ms
Apr 2 00:53:57.362: INFO: Pod "pod-projected-secrets-2d263020-7dda-43ec-8d73-b5e81e22764b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064058636s
Apr 2 00:53:59.366: INFO: Pod "pod-projected-secrets-2d263020-7dda-43ec-8d73-b5e81e22764b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06850975s
STEP: Saw pod success
Apr 2 00:53:59.366: INFO: Pod "pod-projected-secrets-2d263020-7dda-43ec-8d73-b5e81e22764b" satisfied condition "Succeeded or Failed"
Apr 2 00:53:59.369: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-2d263020-7dda-43ec-8d73-b5e81e22764b container projected-secret-volume-test:
STEP: delete the pod
Apr 2 00:53:59.439: INFO: Waiting for pod pod-projected-secrets-2d263020-7dda-43ec-8d73-b5e81e22764b to disappear
Apr 2 00:53:59.444: INFO: Pod pod-projected-secrets-2d263020-7dda-43ec-8d73-b5e81e22764b no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:53:59.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1169" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4654,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:53:59.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:53:59.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-181" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4655,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:53:59.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 2 00:54:00.324: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 2 00:54:02.335: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385640, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385640, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385640, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721385640, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 2 00:54:05.372: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:54:05.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3303" for this suite.
STEP: Destroying namespace "webhook-3303-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.011 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":272,"skipped":4696,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:54:05.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0402 00:54:07.058751       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 2 00:54:07.058: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:54:07.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-838" for this suite.
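The garbage-collector test above does not fail on the intermediate `expected 0 rs, got 1 rs` observation: it polls the cluster until the ReplicaSet and its pods disappear or a deadline passes. A generic poll-until-true helper capturing that pattern (a sketch of the idea; the suite's own implementation is Go's `wait.Poll`, and the `remaining` iterator below is a hypothetical stand-in for repeated API list calls):

```python
import time

def poll_until(check, timeout_s=30.0, interval_s=1.0):
    """Poll check() until it returns True or the deadline passes,
    mirroring how the suite waits for ReplicaSets to be
    garbage-collected after the owning Deployment is deleted."""
    deadline = time.monotonic() + timeout_s
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval_s)

# Hypothetical cluster state: 1 ReplicaSet on the first poll (as in the
# log), then 0 once the garbage collector has caught up.
remaining = iter([1, 0])
ok = poll_until(lambda: next(remaining) == 0, timeout_s=5, interval_s=0.01)
```

The helper returns `True` here because the count reaches zero before the deadline; a cluster whose GC never caught up would time out and the test would fail instead.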
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":273,"skipped":4699,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:54:07.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9891
[It] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-9891
Apr 2 00:54:07.145: INFO: Found 0 stateful pods, waiting for 1
Apr 2 00:54:17.148: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 2 00:54:17.211: INFO: Deleting all statefulset in ns statefulset-9891
Apr 2 00:54:17.215: INFO: Scaling statefulset ss to 0
Apr 2 00:54:37.276: INFO: Waiting for statefulset status.replicas updated to 0
Apr 2 00:54:37.279: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:54:37.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9891" for this suite.
• [SLOW TEST:30.237 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":274,"skipped":4702,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 2 00:54:37.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
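The StatefulSet test's "getting scale subresource / updating a scale subresource" steps exercise the `/scale` endpoint: a small projection of the object that exposes only the replica count, so a client can resize without reading or writing the full spec. A toy in-memory sketch of that contract (plain dicts standing in for API objects; real clients go through the API server, e.g. `kubectl scale`):

```python
# Minimal stand-in for a StatefulSet and its /scale subresource.
statefulset = {"metadata": {"name": "ss"}, "spec": {"replicas": 1}}

def get_scale(obj: dict) -> dict:
    """Read the scale subresource: only the replica count is exposed."""
    return {"spec": {"replicas": obj["spec"]["replicas"]}}

def update_scale(obj: dict, scale: dict) -> None:
    """Write the scale subresource back: only spec.replicas changes."""
    obj["spec"]["replicas"] = scale["spec"]["replicas"]

scale = get_scale(statefulset)       # "getting scale subresource"
scale["spec"]["replicas"] = 2        # "updating a scale subresource"
update_scale(statefulset, scale)
# "verifying the statefulset Spec.Replicas was modified"
assert statefulset["spec"]["replicas"] == 2
```

Because the scale write touches nothing but `spec.replicas`, autoscalers and `kubectl scale` can operate on Deployments, ReplicaSets, and StatefulSets uniformly, which is exactly what this conformance test verifies.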
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 2 00:54:41.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6323" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":275,"skipped":4705,"failed":0}
SSSSSSSSSSSS
Apr 2 00:54:41.536: INFO: Running AfterSuite actions on all nodes
Apr 2 00:54:41.536: INFO: Running AfterSuite actions on node 1
Apr 2 00:54:41.536: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 4667.277 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS
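Each `{"msg": ...}` record in this log is machine-readable JSON emitted per spec, and the final "Test Suite completed" record should be consistent with the closing `Ran 275 of 4992 Specs` summary. A small sanity check over the last two records from this run (record strings copied from the log above):

```python
import json

records = [
    '{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":275,"skipped":4705,"failed":0}',
    '{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}',
]
final = json.loads(records[-1])
ran, skipped, failed = final["completed"], final["skipped"], final["failed"]

# 275 completed + 4717 skipped == the 4992 discovered specs in the summary.
assert ran + skipped == 4992
print(f"Ran {ran} of {ran + skipped} Specs, {failed} failed")
```

This is how downstream tooling (such as the functest harness that wrote the JUnit report above) can tally the run without scraping the human-readable summary lines.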